# Glimpse
A blazingly fast tool for peeking at codebases. Perfect for loading your codebase into an LLM's context, with built-in token counting support.
## Features
- 🚀 Fast parallel file processing
- 🌳 Tree-view of codebase structure
- 📝 Source code content viewing
- 🔢 Token counting with multiple backends
- ⚙️ Configurable defaults
- 📋 Clipboard support
- 🎨 Customizable file type detection
- 🥷 Respects .gitignore automatically
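The core pipeline behind these features is straightforward: walk the tree, skip excluded paths, and process the surviving files in parallel. A minimal Python sketch of that idea (glimpse itself is written in Rust; the exclude patterns and the byte-length stand-in for token counting here are illustrative only):

```python
# Conceptual sketch of glimpse's pipeline: walk a directory,
# filter out excluded paths, then process files in parallel.
import fnmatch
import os
from concurrent.futures import ThreadPoolExecutor

# Illustrative patterns, borrowed from the example config below
EXCLUDES = ["**/.git/**", "**/target/**", "**/node_modules/**"]

def is_excluded(path: str) -> bool:
    return any(fnmatch.fnmatch(path, pat) for pat in EXCLUDES)

def collect_files(root: str) -> list[str]:
    files = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if not is_excluded(full):
                files.append(full)
    return files

def process(path: str) -> tuple[str, int]:
    # Stand-in for real token counting: just measure the file in bytes
    with open(path, "rb") as f:
        return path, len(f.read())

def snapshot(root: str) -> dict[str, int]:
    # Process all collected files concurrently, as glimpse does with threads
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(process, collect_files(root)))
```

The real tool layers `.gitignore` handling, depth limits, and tokenizer backends on top of this same walk-filter-process shape.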
## Installation

Using cargo:

```bash
cargo install glimpse
```

Using Homebrew:

```bash
brew install glimpse   # assumes a formula named "glimpse"
```

Using Nix:

```bash
# Install directly (replace <owner> with the repository owner)
nix profile install github:<owner>/glimpse
```

```nix
# Or use in your flake
{
  inputs.glimpse.url = "github:<owner>/glimpse";
}
```
## Usage

Basic usage:

```bash
# Analyze the current directory
glimpse

# Or point it at a path
glimpse path/to/project
```

Common options:

```bash
# Show hidden files
glimpse --hidden

# Only show tree structure
glimpse -o tree

# Copy output to clipboard (the default destination; use -p to print instead)
glimpse

# Save output to file
glimpse -f output.txt

# Include specific file types
glimpse -i "*.rs,*.go"

# Exclude patterns
glimpse -e "target/*,*.lock"

# Count tokens using tiktoken (OpenAI's tokenizer)
glimpse --tokenizer tiktoken

# Use HuggingFace tokenizer with specific model
glimpse --tokenizer huggingface --model gpt2

# Use custom local tokenizer file
glimpse --tokenizer huggingface --tokenizer-file path/to/tokenizer.json
```
## CLI Options

```
Usage: glimpse [OPTIONS] [PATH]

Arguments:
  [PATH]  Directory/Files to analyze [default: .]

Options:
      --interactive            Opens interactive file picker (? for help)
  -i, --include <PATTERNS>     Additional patterns to include (e.g. "*.rs,*.go")
  -e, --exclude <PATTERNS>     Additional patterns to exclude
  -s, --max-size <BYTES>       Maximum file size in bytes
      --max-depth <DEPTH>      Maximum directory depth to traverse
  -o, --output <FORMAT>        Output format: tree, files, or both
  -f, --file <PATH>            Save output to specified file
  -p, --print                  Print to stdout instead of clipboard
  -t, --threads <COUNT>        Number of threads for parallel processing
  -H, --hidden                 Show hidden files and directories
      --no-ignore              Don't respect .gitignore files
      --no-tokens              Disable token counting
      --tokenizer <TYPE>       Tokenizer to use: tiktoken or huggingface
      --model <NAME>           Model name for HuggingFace tokenizer
      --tokenizer-file <PATH>  Path to local tokenizer file
  -h, --help                   Print help
  -V, --version                Print version
```
## Configuration

Glimpse uses a config file located at:

- Linux/macOS: `~/.config/glimpse/config.toml`
- Windows: `%APPDATA%\glimpse\config.toml`
Example configuration:

```toml
# General settings (key names mirror the CLI options)
max_size = 10485760   # 10MB
max_depth = 20
default_output = "both"

# Token counting settings
default_tokenizer = "tiktoken"   # Can be "tiktoken" or "huggingface"
default_model = "gpt2"           # Default model for HuggingFace tokenizer

# Default exclude patterns
default_excludes = [
    "**/.git/**",
    "**/target/**",
    "**/node_modules/**"
]
```
## Token Counting

Glimpse supports two tokenizer backends:

- **Tiktoken** (default): OpenAI's tokenizer implementation, perfect for accurately estimating tokens for GPT models.
- **HuggingFace Tokenizers**: supports any model from the HuggingFace Hub or local tokenizer files, great for custom models or other ML frameworks.

The token count appears in both file content views and the final summary, helping you estimate context window usage for large language models.
Example token count output:

```
File: src/main.rs
Tokens: 245
==================================================
// File contents here...

Summary:
Total files: 10
Total size: 15360 bytes
Total tokens: 2456
```
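The "Total tokens" figure in the summary is what you budget against a model's context window. A quick back-of-the-envelope check (the window sizes below are illustrative assumptions, not quoted from any provider):

```python
# Check whether a glimpse snapshot fits a model's context window,
# leaving headroom for the prompt and the model's reply.
# Window sizes are illustrative assumptions; verify against your provider.
WINDOWS = {"small-context-model": 8_192, "large-context-model": 128_000}

def fits(total_tokens: int, window: int, reserve: int = 2_000) -> bool:
    """True if the snapshot plus `reserve` tokens of prompt/response fits."""
    return total_tokens + reserve <= window

total = 2456  # "Total tokens" from the summary above
for name, window in WINDOWS.items():
    print(f"{name}: {'fits' if fits(total, window) else 'too large'}")
```

If the snapshot is too large, narrowing the run with `-i`, `-e`, or `--max-depth` is usually the quickest way to bring the total down.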
## Troubleshooting

- **File too large**: adjust `max_size` in the config
- **Missing files**: check the `--hidden` flag and your exclude patterns
- **Performance issues**: try adjusting the thread count with `-t`
- **Tokenizer errors**:
  - For HuggingFace models, ensure you have an internet connection for the initial download
  - For local tokenizer files, verify the file path and format
  - Try the default tiktoken backend if issues persist
## License
MIT