Overview

Claude Code is Anthropic’s AI coding and task assistant, available as a CLI, VS Code extension, and desktop app. Beyond coding, it functions as a general-purpose agent for vault management, research, automation, and multi-agent orchestration. Key differentiators: persistent memory, Computer Use, Dispatch (mobile-to-desktop), and a growing ecosystem of features.

Key Features

Memory System

  • Auto Memory — Claude automatically writes memory notes about project preferences and corrections to ~/.claude/projects/<project>/memory/
  • Auto Dream — background memory consolidation (undisclosed feature, working as of early 2026). Reviews session transcripts, prunes stale/contradictory memories, merges updates. Triggered after 24h + 5 sessions. Three phases: Orientation → Signal Gathering → Consolidation & Pruning
  • MEMORY.md — should be an index pointing to sub-memory files, not contain memories itself
  • Analogy: Auto Memory = taking notes during the day; Auto Dream = REM sleep consolidating them
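A hypothetical sketch of what this layout implies — exact filenames below are illustrative, not documented; only the directory path and the MEMORY.md-as-index rule come from the notes above:

```
~/.claude/projects/<project>/memory/
├── MEMORY.md            # index only — one line per topic, pointing at the files below
├── preferences.md       # coding style, tone, tooling choices
├── corrections.md       # mistakes Claude was told not to repeat
└── decisions.md         # project decisions worth persisting across sessions
```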

Computer Use

  • Claude can control the screen, open apps, and interact with the native OS (macOS/Windows)
  • Available via Claude Desktop app
  • Relevant to this vault: can automate Obsidian actions, file operations, browser tasks

Claude Code Interfaces

  • CLI (claude in terminal) — full feature access, scripting, MCP config
  • VS Code extension — IDE-integrated panel, inline plan review, conversation history
  • Desktop app — Computer Use, Dispatch, live app preview, scheduled tasks, GitHub connectors
  • All three share conversation history; you can switch between them mid-task

Token Optimisation & Limit Management

Claude’s limits work on two axes: message limits (conversation turns per period) and token consumption (context window used per turn). Most users hit message limits not because of volume but because they’re burning tokens inefficiently.

Why limits are hit faster than expected:

  • Long conversation threads accumulate context — every turn re-sends the entire prior conversation
  • Large file reads or code dumps are expensive; Claude re-reads them each time
  • Agentic loops (Computer Use, multi-step tasks) consume tokens at a multiplied rate
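The compounding cost of long threads can be sketched with rough numbers (the per-turn token count is an illustrative assumption, not an actual API figure):

```python
def thread_cost(turns, tokens_per_turn=500):
    """Total tokens consumed when every turn re-sends the whole prior conversation."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn  # this turn's new content joins the history
        total += history            # the full history is sent again on every turn
    return total

# One 20-turn thread vs. four fresh 5-turn threads covering the same work:
one_long = thread_cost(20)       # 105,000 tokens
four_short = 4 * thread_cost(5)  # 30,000 tokens
```

Because the history is re-sent every turn, cost grows quadratically with thread length — which is why splitting work across fresh threads (strategy 1 below) saves so much.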

Key strategies for extending sessions:

  1. Start fresh threads for new tasks — don’t continue old conversations; each new thread starts with zero context overhead
  2. Use CLAUDE.md / handoff files — encode persistent project context in files so Claude can reload it selectively rather than scrolling through prior turns
  3. Summarise before continuing — if a thread is getting long, ask Claude to summarise the conversation into a compact block, then start a new thread with that summary as context
  4. Avoid redundant file reads — reference specific sections instead of reading entire large files; anything Claude has already read stays in the current session’s context
  5. Compact task descriptions — be precise in instructions; vague prompts lead Claude to generate exploratory tokens before getting to the work
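Strategies 2 and 3 converge on the same artifact: a compact handoff file. A hypothetical example (section names and contents are illustrative, not a documented format):

```markdown
# CLAUDE.md — handoff

## Project
Obsidian vault automation; agent deliverables land in the inbox for human review.

## Current state
Done: wiki ingest, research + writing agent SOPs.
Next: community-management agent.

## Constraints
- MEMORY.md is an index only — never write memories into it directly.
- Summarise long threads here, then start a fresh thread pointing at this file.
```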

The real insight: context length ≠ quality. A well-structured short prompt with the right files referenced outperforms a long rambling thread every time. The limit problem is almost always a context hygiene problem, not a plan limit problem.

Projects & Memory (Claude.ai)

  • Projects = folders with persistent context files, instructions, and memory — available on Pro plan and above
  • Upload a one-pager doc per project (who you are, goals, pain points) — Claude tailors all responses accordingly
  • Separate chats within a project each share the project context + memory
  • Memory now reads across multiple chats in a project (upgraded early 2026); addresses the “starts fresh” complaint
  • Memory stacks over time — no need to re-explain yourself per session

Artifacts

  • Visual, interactive HTML/JS outputs rendered inside the app
  • Can be exported as standalone mini-websites
  • Use cases: dashboards, habit trackers, personal operating systems, landing pages, financial tools
  • Built via normal chat — does NOT require the Code tab or terminal
  • Key differentiator vs ChatGPT for non-technical users

Connectors

  • Native integrations: Gmail, Google Calendar, Google Drive
  • Claude pulls data directly into the conversation without manual file upload
  • Combine with Co-work for automated data pipelines

Claude Skills

  • Slash commands for repeatable tasks within a project chat (e.g. /daily-brief)
  • Train once → callable on demand in that chat context
  • Now integrated with Slack, Notion, and other major apps
  • Relevant for small-business workflow automation without enterprise pricing
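If these map onto Anthropic’s file-based Skills convention (a folder containing a SKILL.md with YAML frontmatter), a /daily-brief skill might look like the sketch below — the frontmatter fields follow the published convention, but the body content is entirely illustrative:

```markdown
---
name: daily-brief
description: Compile a morning brief from calendar, inbox, and open tasks.
---

1. Pull today's events from the Calendar connector.
2. Summarise unread priority email threads.
3. List open tasks due within 48h, flagging blockers.
4. Deliver the brief as a single note for review.
```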

Pricing Tiers (as of early 2026)

| Tier       | Cost     | Limit           | Best for                                          |
|------------|----------|-----------------|---------------------------------------------------|
| Free       | $0       | Soft daily caps | Casual use, Sonnet only                           |
| Pro        | $20/mo   | 5× Free         | Most users; Projects, Research, Opus, Claude Code |
| Max        | $100/mo  | 5× Pro          | Heavy Claude Code users, large context            |
| Ultra      | $200/mo  | 20× Pro         | All-day power users                               |
| Enterprise | Custom   | Custom          | Large orgs                                        |

Synthesis

Claude Code is increasingly the primary tool for managing and growing this vault. The agent architecture (personas, wiki ingest, memory) built into this vault mirrors how Claude Code itself is designed — memory, consolidation, specialised agents. The Auto Dream feature is directly relevant: the same consolidation logic Claude uses for its own memory is being applied to the vault’s wiki layer.

Orchestrator + Agent Team Pattern

  • An orchestrator agent (e.g. “Larry”) routes tasks to specialised sub-agents, each with their own domain and SOP
  • The inbox is where AI deliverables land for human review — not a notes dump, but a handoff point
  • Specialised agents can handle: community management, HR, research, writing — any repeatable workflow
  • This pattern mirrors the persona system built into this vault (Allan orchestrates, specialists execute)
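The routing pattern above can be sketched minimally — agent names and the keyword-based routing are illustrative; a real orchestrator would route via prompts and SOPs, not string matching:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    domain: str  # keyword this specialist owns

    def handle(self, task: str) -> str:
        # A real agent would execute its SOP; here we just tag the deliverable.
        return f"[{self.name}] deliverable for: {task}"

class Orchestrator:
    """Routes tasks to specialised sub-agents; unmatched tasks stay with the orchestrator."""

    def __init__(self, agents):
        self.agents = agents

    def route(self, task: str) -> str:
        for agent in self.agents:
            if agent.domain in task.lower():
                return agent.handle(task)  # handoff lands in the inbox for review
        return f"[orchestrator] handling directly: {task}"

larry = Orchestrator([
    Agent("hr-agent", "onboarding"),
    Agent("research-agent", "research"),
    Agent("writing-agent", "draft"),
])
```

The key design choice is the fallback: anything no specialist claims stays with the orchestrator rather than being forced onto the wrong agent.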

Contradictions / Open Questions

  • Auto Dream hasn’t been officially announced — may change before GA
  • Computer Use: how reliable is it for Obsidian automation? Worth testing