# gh_models 🧠
gh_models is a Rust client for accessing GitHub-hosted AI models via the https://models.github.ai API. It provides a simple interface for chat-based completions, similar to OpenAI’s API, but powered by GitHub’s model infrastructure.
## ✨ Features
- Chat completion support for GitHub-hosted models (e.g. `openai/gpt-4o`)
- Easy authentication via a GitHub personal access token (PAT)
- Async-ready with `tokio`
- Clean and ergonomic API
## 🚀 Getting Started

### 1. Install
Add to your `Cargo.toml`:

```toml
[dependencies]
gh_models = "0.1.0"
```
### 2. Authenticate
Set your GitHub token as an environment variable:
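Assuming the client reads the conventional `GITHUB_TOKEN` variable (check the crate docs for the exact name):

```shell
export GITHUB_TOKEN=<your-token>
```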
> 🔐 You can generate a PAT in your GitHub settings under **Managing your personal access tokens**.
## 📦 Example
The snippet below sketches a minimal chat completion. The exact `ChatMessage` fields, environment variable name, and return type are assumptions; consult the crate docs for the precise API.

```rust
use gh_models::{ChatMessage, GHModels};
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the token from the environment (variable name assumed).
    let token = env::var("GITHUB_TOKEN")?;
    let client = GHModels::new(token);

    // Field names on ChatMessage are an assumption for illustration.
    let messages = vec![ChatMessage {
        role: "user".to_string(),
        content: "Hello!".to_string(),
    }];

    // Parameters: model, messages, temperature, max_tokens, top_p.
    let response = client
        .chat_completion("openai/gpt-4o", messages, 0.7, 256, 1.0)
        .await?;

    // Printing via Debug; the response type is an assumption.
    println!("{:?}", response);
    Ok(())
}
```
To run this example:
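With the token exported, run it through Cargo. The example name `chat` is hypothetical; use whatever name appears under the crate's `examples/` directory.

```shell
# Example name is an assumption; check the examples/ directory.
cargo run --example chat
```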
## 📚 API Overview
### `GHModels::new(token: String)`
Creates a new client using your GitHub token.
### `chat_completion(...)`
Sends a chat request to the model endpoint. Parameters:
- `model`: Model name (e.g. `"openai/gpt-4o"`)
- `messages`: A list of `ChatMessage` structs
- `temperature`: Sampling temperature
- `max_tokens`: Maximum output tokens
- `top_p`: Nucleus sampling parameter
## 🛠️ Development
Clone the repo and run examples locally:
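A typical workflow might look like the following. Both the repository URL and the example name are assumptions inferred from the crate and author names.

```shell
# Repository URL and example name are assumptions.
git clone https://github.com/Pjdur/gh_models
cd gh_models
cargo run --example chat
```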
## 📄 License
MIT © Pjdur
## 🤝 Contributing
Pull requests welcome! If you’d like to add streaming support, error handling, or model introspection, feel free to open an issue or PR.