Zen - Topic-Based Spaced Repetition CLI
A modern spaced repetition CLI that uses LLM-powered question generation and evaluation to help you learn any topic effectively.
Features
- Topic-Based Learning: Organize knowledge by keywords instead of individual flashcards
- LLM-Powered Questions: Fresh questions generated for each review session
- Web-Enhanced Questions (Optional): LLM can search the web for up-to-date information when generating questions
- Automatic Grading: LLM evaluates your answers and provides detailed feedback
- FSRS Scheduling: Advanced spaced repetition algorithm for optimal review timing
- 3-Question Review: Each topic tested with 3 different questions per session
- Automatic Rating: Your scores (0-100) automatically convert to SRS ratings
- Beautiful TUI: Clean, intuitive terminal interface
Installation

```sh
# From a local checkout of the repository (requires the Rust toolchain)
cargo install --path .
```
Quick Start
1. Configure LLM
Create `~/.zen/config.toml`:

```toml
[llm]
provider = "groq"
api_key = "your-groq-api-key"
model = "llama-3.3-70b-versatile"
```
Get a free Groq API key at groq.com
2. Add Topics

```sh
# Single keyword
zen add "LSTM"

# Multiple related keywords
zen add "LSTM, recurrent neural networks, time series"
```

Note: Commas separate keywords; spaces are part of each keyword.
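The splitting rule can be sketched as follows (an illustration, not zen's actual parser; trimming whitespace around each comma is an assumption, while spaces inside a keyword are preserved as the note describes):

```rust
// Sketch of the keyword-splitting rule: split on commas, trim whitespace
// around each piece (assumption), keep internal spaces as part of the keyword.
// Illustrative only — not taken from zen's source.
fn parse_keywords(input: &str) -> Vec<String> {
    input
        .split(',')
        .map(|k| k.trim().to_string())
        .filter(|k| !k.is_empty())
        .collect()
}

fn main() {
    let kws = parse_keywords("LSTM, recurrent neural networks, time series");
    // Internal spaces survive: "recurrent neural networks" is one keyword.
    assert_eq!(kws, vec!["LSTM", "recurrent neural networks", "time series"]);
}
```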
3. Review Topics
The review process:
- See your topic keywords
- LLM generates a question
- Type your answer (multi-line supported)
- LLM grades your answer (0-100) and provides feedback
- Repeat for 3 questions total
- Average score determines next review date
4. Track Progress

```sh
# View all topics
zen list

# View only due topics
zen list --due

# See detailed statistics (TUI)
zen stats
```
The stats command opens an interactive TUI with two screens:
- Topic Performance: Shows each topic with keywords, last/average scores, and question-wise performance matrix
- Keyword Performance: Shows aggregated statistics for each keyword with performance across all topics
Navigation:
- `Tab` - Switch between Topic and Keyword views
- `↑`/`↓` or `j`/`l` - Scroll up/down
- `PgUp`/`PgDn` - Scroll by 10 items
- `Home`/`End` - Jump to top/bottom
- `q` - Quit
5. Manage Topics

```sh
# Delete a topic
zen delete <topic-id>
```
How It Works
Topic Reviews
Each review session:
- LLM generates 3 unique questions covering your topic
- You answer each question in the TUI
- LLM evaluates each answer (0-100 score + feedback)
- Average score converts to FSRS rating:
- 90%+ → Easy (long interval)
- 70-89% → Good (medium interval)
- 60-69% → Hard (short interval)
- <60% → Again (very short interval)
Score-to-Rating Conversion
The app automatically determines your rating based on LLM scores:
| Score Range | Rating | Next Review |
|---|---|---|
| 90-100% | Easy | Weeks/months later |
| 70-89% | Good | Days/weeks later |
| 60-69% | Hard | Days later |
| 0-59% | Again | Hours/1 day later |
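The thresholds in the table above reduce to a simple mapping; the sketch below uses the standard FSRS rating convention (1 = Again, 2 = Hard, 3 = Good, 4 = Easy) and is illustrative rather than zen's actual implementation:

```rust
// Map an average LLM score (0-100) to an FSRS rating, following the
// thresholds documented in the table above. Ratings use the FSRS
// convention: 1 = Again, 2 = Hard, 3 = Good, 4 = Easy.
// Sketch only — not taken from zen's source.
fn score_to_rating(avg_score: f64) -> u8 {
    match avg_score {
        s if s >= 90.0 => 4, // Easy
        s if s >= 70.0 => 3, // Good
        s if s >= 60.0 => 2, // Hard
        _ => 1,              // Again
    }
}

fn main() {
    assert_eq!(score_to_rating(95.0), 4);
    assert_eq!(score_to_rating(85.0), 3);
    assert_eq!(score_to_rating(65.0), 2);
    assert_eq!(score_to_rating(40.0), 1);
}
```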
FSRS Algorithm
Uses the Free Spaced Repetition Scheduler (FSRS) algorithm for optimal review timing:
- Stability: How well you remember
- Difficulty: How hard the topic is for you
- Retrievability: Probability of recall
The algorithm adapts to your performance and schedules reviews at the optimal time for long-term retention.
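As a concrete example, one published form of the retrievability curve (FSRS v4) is R(t) = (1 + t/(9S))⁻¹, where t is days since the last review and S is stability. Stability is defined so that recall probability is exactly 90% when t = S. The sketch below illustrates that formula; zen may use a newer FSRS variant internally:

```rust
// FSRS v4 retrievability: R(t) = 1 / (1 + t / (9 * S)), where
//   t = days elapsed since the last review,
//   S = stability (the interval at which recall probability is 90%).
// Illustrative sketch — not necessarily the exact variant zen uses.
fn retrievability(days_elapsed: f64, stability: f64) -> f64 {
    1.0 / (1.0 + days_elapsed / (9.0 * stability))
}

fn main() {
    let s = 10.0; // stability of 10 days
    // By definition, recall probability is 90% when t == S:
    assert!((retrievability(s, s) - 0.9).abs() < 1e-9);
    // Immediately after a review, recall probability is 100%:
    assert_eq!(retrievability(0.0, s), 1.0);
}
```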
Commands
Examples
Adding Topics
```sh
# Machine Learning concepts
zen add "LSTM, recurrent neural networks, time series"

# Programming languages
zen add "rust, ownership, borrowing"

# Math concepts
zen add "eigenvalues, eigenvectors, diagonalization"

# Business concepts
zen add "SWOT analysis, market segmentation"
```
Review Session Example
┌─ Topic 1 of 3 | ID: 7jlHGY ──────────────────┐
│ LSTM, recurrent neural networks, time series │
└───────────────────────────────────────────────┘
┌─ Question 1 of 3 ─────────────────────────────┐
│ How does an LSTM differ from a standard RNN │
│ in terms of handling the vanishing gradient │
│ problem? │
└───────────────────────────────────────────────┘
┌─ Your Answer (Press Space to start typing) ───┐
│ │
└────────────────────────────────────────────────┘
After answering, you'll see:
┌─ Score: 85/100 (Good) ────────────────────────┐
│ Good explanation of gates and memory cells. │
│ Could have mentioned the forget gate's role. │
└────────────────────────────────────────────────┘
Performance Statistics
The zen stats command provides detailed statistics in an interactive TUI with two screens:
Topic Performance Screen
Shows all topics sorted by performance (lowest scores first) with a statistics table on the right:
Topic Performance
Keywords Last Avg Recent Sessions ┃ Topic Keyword
────────────────────────────────────────────────────┃ Total 15 42
LSTM, RNN 65.0 72.5 · · · · · · ✗ ✓ - ┃ Due Today 3 5
· · · · · · ✓ - ✗ ┃ Due Week 8 12
· · · · · · - ✗ ✓ ┃ ─────────────────────────
┃ Reviews 95
rust, ownership 78.3 80.1 · · · · · ✓ - ✓ - ┃ Avg Score 75.2% 74.8%
· · · · · - ✓ ✗ ✓ ┗━━━━━━━━━━━━━━━━━━━━━━━━
· · · · · ✓ - - -
Layout:
- Left: Topic list with performance matrices
- Right: Statistics table comparing Topic and Keyword metrics
Fields:
- Keywords: Topic keywords (no IDs shown)
- Last: Score from most recent review session
- Avg: Overall average score across all reviews
- Recent Sessions: Fixed 10-column × 3-row grid
- Each column = one review session with 3 questions
- Rightmost column = most recent session
- Symbols:
    `✓` Easy (≥90), `-` Good/Hard (60-89), `✗` Again (<60), `·` No data
Keyword Performance Screen
Shows aggregated statistics for each keyword with performance across topics:
╔════════════════════════════════════════════════════════════════╗
║ Keyword Performance ║
║ Keywords: 42 | Due Today: 5 | Due Week: 12 | Avg: 74.8% ║
╚════════════════════════════════════════════════════════════════╝
Keyword Topics Avg Performance by Topic (rightmost = most recent)
──────────────────────────────────────────────────────────────────────────────────────────
LSTM 3 68.5 · · · · · · · ✗ - ✓
· · · · · · · ✓ ✗ -
· · · · · · · - ✓ ✗
Fields:
- Keyword: The keyword text
- Topics: Number of topics containing this keyword
- Avg: Average score across all reviews for this keyword
- Performance Matrix: Fixed 10-column × 3-row grid showing performance across topics
- Each column = 3 questions from one topic's most recent session
- If "LSTM" appears in 3 topics, rightmost 3 columns show those topics
- Shows how this keyword performs across different contexts
- Same color coding: Green (≥90), Yellow (60-89), Red (<60), Gray (no data)
Important:
- When you switch between views using Tab, the summary statistics update to show metrics relevant to that view
- Keyword "Due Today" and "Due Week" counts show unique keywords in due topics
- Both screens are sorted ascending by average score, so topics/keywords that need more practice appear at the top
Tips
Effective Keyword Selection
- Specific: "LSTM architecture" is better than just "AI"
- Related: Group keywords that belong together
- Memorable: Use keywords that trigger the right mental model
Good Topics vs Bad Topics
✅ Good: "React hooks, useState, useEffect, component lifecycle"
- Related concepts
- Right level of granularity
- Clear scope
❌ Bad: "programming"
- Too broad
- No clear scope
- LLM can't generate focused questions
Review Best Practices
- Be honest: Don't look up answers during review
- Type freely: Multi-line answers are encouraged
- Review regularly: The algorithm works best with consistent reviews
- Trust the LLM: The grading is strict but fair
Configuration
LLM Providers
Currently supports:
- Groq (recommended - fast and free tier available)
Configuration file: `~/.zen/config.toml`

```toml
[llm]
provider = "groq"
api_key = "your-api-key"
model = "llama-3.3-70b-versatile"
```
Web Search (Optional)
NEW: Enable web search to generate questions with up-to-date information!
The LLM can automatically search the web when it needs recent information about:
- Rapidly changing technologies (frameworks, tools, languages)
- Current best practices and standards
- Latest versions and features
- Recent developments and news
Add to `~/.zen/config.toml`:

```toml
[web_search]
provider = "tavily"  # or "brave", "serper", "serpapi"
api_key = "your-search-api-key"
```
Supported providers (all have free tiers):
- Tavily (recommended) - 1,000 free searches/month - tavily.com
- Brave Search - 2,000 free queries/month - brave.com/search/api
- Serper - 2,500 free queries - serper.dev
- SerpAPI - 100 free searches/month - serpapi.com
The LLM intelligently decides when to search - it won't search for evergreen topics like "What is a variable?" but will search for "Latest React 19 features" or "Python 3.13 new syntax".
📖 See WEB_SEARCH_SETUP.md for detailed setup instructions.
Data Location
All data stored in ~/.zen/:
- `zen.db` - SQLite database with topics, schedules, and review history
- `config.toml` - Configuration file
Architecture
Database Schema
topics
├── id (TEXT)
├── created_at (TIMESTAMP)
└── modified_at (TIMESTAMP)
topic_keywords
├── topic_id (FK)
├── keyword (TEXT)
└── position (INTEGER)
topic_schedule
├── topic_id (FK)
├── due_date (TIMESTAMP)
├── stability (REAL)
├── difficulty (REAL)
└── retrievability (REAL)
topic_review_logs
├── topic_id (FK)
├── timestamp (TIMESTAMP)
├── rating (1-4)
└── average_score (0-100)
topic_question_logs
├── review_log_id (FK)
├── question_number (1-3)
├── generated_question (TEXT)
├── user_answer (TEXT)
├── llm_score (0-100)
└── llm_feedback (TEXT)
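The schema above maps naturally onto Rust data structures. A hypothetical sketch (struct, field, and type names are assumptions for illustration, not copied from zen's source):

```rust
// Hypothetical structs mirroring the schema above; names and
// representations (e.g. timestamps as unix seconds) are assumptions.

struct Topic {
    id: String,
    keywords: Vec<String>, // from topic_keywords, ordered by position
}

struct TopicSchedule {
    topic_id: String,
    due_date_unix: i64, // TIMESTAMP stored as unix seconds (assumption)
    stability: f64,
    difficulty: f64,
    retrievability: f64,
}

struct QuestionLog {
    question_number: u8, // 1-3
    generated_question: String,
    user_answer: String,
    llm_score: u8, // 0-100
    llm_feedback: String,
}

fn main() {
    let topic = Topic {
        id: "7jlHGY".to_string(),
        keywords: vec!["LSTM".into(), "recurrent neural networks".into()],
    };
    assert_eq!(topic.keywords.len(), 2);
}
```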
Development
Building

```sh
cargo build --release
```
Running Tests

```sh
cargo test
```
Project Structure
src/
├── main.rs # CLI entry point
├── lib.rs # Module exports
├── commands.rs # Command implementations
├── database.rs # SQLite operations
├── topic.rs # Topic data structures
├── topic_review.rs # Review session logic
├── topic_review_tui.rs # TUI application
├── llm_evaluator.rs # LLM integration
└── config.rs # Configuration management
License
MIT
Contributing
Contributions welcome! Please open an issue or PR.
Roadmap
- Add more LLM providers (OpenAI, Anthropic, local models)
- Export/import topics
- Study streak tracking
- Topic categories/tags
- Mobile app (see ANDROID_GUIDE.md)
- Web interface