# LuckyShot
A powerful CLI tool that enhances code understanding and automation by finding the most relevant files in your codebase for AI-assisted programming.
## Table of Contents
- Why This Tool?
- Warnings
- Features
- Hyperparameters
- Installation
- Usage
- Environment Setup
- Hybrid Algorithm
- Contributing
- License
## Why This Tool?
Finding the right files to manipulate with AI is crucial for effective code generation and modification. Traditional approaches like grep or fuzzy finding often miss semantically relevant files that don't contain exact keyword matches.
This tool uses a hybrid approach combining two powerful search techniques:
- **BM25 Ranking**: A battle-tested information retrieval algorithm (used by search engines) that excels at keyword matching while accounting for term frequency and document length. It is particularly good at finding files containing specific technical terms or function names.
- **RAG (Retrieval-Augmented Generation) with Embedding Distance**: Uses OpenAI's embeddings to capture the semantic meaning of both your query and your codebase. By computing dot-product similarity between vectors, it can find conceptually related files even when they use different terminology.
The hybrid scoring system combines both approaches:
- BM25 helps catch direct matches and technical terms
- Embedding distance captures semantic relationships and higher-level concepts
- Results are normalized and merged to give you the most relevant files for your task
This dual approach helps ensure you don't miss important context when using AI to modify your codebase.
## Warnings

⚠️ This tool is alpha software and has not been thoroughly evaluated with real-world tests. Be aware of the cost of generating embedding vectors!
## Features
- File scanning with customizable chunk sizes and overlap
- Semantic search using OpenAI embeddings and BM25 ranking
- Support for piped input and file suggestions
- Intelligent context expansion
- Supports Unix-philosophy piped commands
## Hyperparameters
The tool exposes several hyperparameters for fine-tuning its performance:
- **Chunk Size**: Determines the size of the code chunks used during scanning. Larger chunks capture more context but can be less precise.
- **Chunk Overlap**: Controls the overlap between consecutive chunks. Increasing overlap helps capture context that spans chunk boundaries.
- **Filter Similarity**: Sets the threshold for similarity scores when suggesting files. A higher threshold yields fewer, more relevant suggestions.
These hyperparameters can be adjusted via command-line options to suit different use cases and codebases. Experimenting with these values can help optimize the tool's performance for your specific needs.
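To make the chunk size and overlap parameters concrete, overlapping chunking can be sketched as follows. This is an illustrative sketch only; the function name, character-based windowing, and defaults are assumptions, not the tool's actual implementation.

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into windows of `size` characters, each sharing
    `overlap` characters with the previous window."""
    step = size - overlap  # how far each window advances
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

With `size=4` and `overlap=2`, `"abcdefghij"` becomes `["abcd", "cdef", "efgh", "ghij"]`: each chunk repeats the last two characters of the previous one, so context spanning a boundary appears intact in at least one chunk.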
## Installation

## Usage
### Scanning Files
Generate embeddings for your codebase using the `scan` command:

```sh
# Basic scan of all Rust files
# Basic scan of all Rust and Markdown files
# Scan with chunking enabled
# Include file metadata in embeddings
# Scan with all options
```
The scan command:
- Finds files matching your pattern (respecting `.gitignore`)
- Generates embeddings using OpenAI's API
- Saves results to `.luckyshot.file.vectors.v1`
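The scan flow above can be sketched in miniature. Everything here is hypothetical (the function name, the dict-based file input, and the `embed` callback are illustration only); the real tool also walks the filesystem, honors `.gitignore`, and calls the OpenAI embeddings API.

```python
import fnmatch

def scan(files, pattern, embed):
    """Sketch of the scan flow: filter paths against a glob pattern,
    embed each matching file's contents, and collect the resulting
    vectors keyed by path.

    files   -- mapping of path -> file contents (stands in for a real
               filesystem walk)
    pattern -- glob pattern such as "*.rs"
    embed   -- callable turning text into an embedding vector
    """
    return {path: embed(text)
            for path, text in files.items()
            if fnmatch.fnmatch(path, pattern)}
```

The output mapping of path to vector mirrors what would be persisted to the vectors file for later lookups.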
### Finding Relevant Files
To find files related to a topic or question:
```sh
# Basic file suggestion
# Using piped input
# Filter results by similarity score (matches >= specified value, range 0.0 to 1.0)
# Show detailed information including similarity scores
# Show file contents of matches
# Limit number of results
# Combine options
# Chain commands Unix-style
```
This will:
- Convert your query into an embedding
- Rank the stored file embeddings by dot-product similarity to the query
- Display the most relevant files with their similarity scores
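The ranking step can be sketched as a plain dot-product scorer. The function name, the `top_k` parameter, and the dict of precomputed vectors are assumptions for illustration, not the tool's API.

```python
def rank_files(query_vec, file_vecs, top_k=5):
    """Score each stored file embedding by its dot product with the
    query embedding and return (path, score) pairs, highest first."""
    scores = {path: sum(q * v for q, v in zip(query_vec, vec))
              for path, vec in file_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

For example, with query `[1.0, 0.0]`, a file embedded near `[0.9, 0.1]` outranks one near `[0.2, 0.8]`, since more of its vector lies along the query direction.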
### Expanding Context
To expand a query with additional context:
## Environment Setup

You'll need an OpenAI API key. Either export `OPENAI_API_KEY` as an environment variable, or create a `.env` file containing:

```
OPENAI_API_KEY=your-api-key
```
## Hybrid Algorithm
The tool uses a novel hybrid approach combining BM25 and embedding-based similarity:
- **BM25 Scoring**: Produces both positive and negative scores
  - Positive scores indicate strong term matches
  - Negative scores suggest term absence/dissimilarity
  - Range varies based on the document collection
- **Embedding Dot Product**: Always produces positive scores
  - Higher values indicate semantic similarity
  - Range typically 0 to 1 after normalization
- **Score Normalization**:
  - BM25: normalized to the [-1, 1] range, preserving sign
  - Embeddings: normalized to the [0, 1] range
  - Maintains relative importance within each scoring method
- **Hybrid Scoring**:
  - Currently uses simple averaging: `(normalized_bm25 + normalized_embedding) / 2`
  - Future plans include a configurable weighting parameter
  - Additional tokenization options are coming soon
This hybrid approach helps balance exact keyword matching (BM25) with semantic understanding (embeddings).
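The normalization and averaging steps described above can be sketched as follows. This is an illustrative sketch under the stated scheme (sign-preserving max-magnitude scaling for BM25, min-max scaling for embeddings, unweighted averaging); function names are assumptions and the tool's internals may differ.

```python
def normalize_bm25(scores):
    # Scale to [-1, 1] by dividing by the largest magnitude, preserving sign.
    peak = max((abs(s) for s in scores), default=1.0) or 1.0
    return [s / peak for s in scores]

def normalize_embeddings(scores):
    # Min-max scale to [0, 1].
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return [(s - lo) / span for s in scores]

def hybrid_scores(bm25, embeddings):
    # Simple unweighted average of the two normalized score lists,
    # matching (normalized_bm25 + normalized_embedding) / 2 above.
    return [(b + e) / 2
            for b, e in zip(normalize_bm25(bm25), normalize_embeddings(embeddings))]
```

For instance, BM25 scores `[2.0, -1.0]` normalize to `[1.0, -0.5]` and embedding scores `[0.5, 1.0]` normalize to `[0.0, 1.0]`, giving hybrid scores `[0.5, 0.25]`: the first document wins on keywords, the second on semantics, and averaging balances the two.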
## Contributing
Contributions are welcome! Please fork the repository and submit a pull request. For major changes, please open an issue first to discuss what you would like to change.
## License
MIT
For more details, see the LICENSE file.