# gh_models 🧠
`gh_models` is a Rust client for GitHub-hosted AI models served via the `https://models.github.ai` API. It provides a simple interface for chat completions, similar to OpenAI's API, but backed by GitHub's model infrastructure.
## ✨ Features
- Chat completion support for GitHub-hosted models (e.g. `openai/gpt-4o`)
- Easy authentication via a GitHub personal access token (PAT)
- Async-ready with `tokio`
- Clean and ergonomic API
## 🚀 Getting Started
### 1. Install
Add to your `Cargo.toml`:

```toml
[dependencies]
gh_models = "0.2.0"
```
### 2. Authenticate
Set your GitHub token as an environment variable:
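For example, in your shell (the examples in this README are sketched assuming the token is read from `GITHUB_TOKEN`):

```shell
# Replace the placeholder with your actual PAT.
export GITHUB_TOKEN=your_token_here
```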
> 🔐 Your token must have the **Models** permission enabled. You can generate a PAT in your GitHub settings; see *Managing your personal access tokens* in the GitHub docs.
### Basic Example (Single‑Turn)
A minimal single-turn request. This is a sketch: the `ChatMessage` fields and the treatment of the return value are assumptions based on the API overview below, so check the crate docs for the exact types.

```rust
use gh_models::{ChatMessage, GHModels};
use std::env;

#[tokio::main]
async fn main() {
    // Read the PAT from the environment (assumes GITHUB_TOKEN is set).
    let token = env::var("GITHUB_TOKEN").expect("GITHUB_TOKEN is not set");
    let client = GHModels::new(token);

    // Note: the role/content fields here are an assumption about ChatMessage.
    let messages = [ChatMessage {
        role: "user".to_string(),
        content: "Write a haiku about Rust.".to_string(),
    }];

    let reply = client
        .chat_completion("openai/gpt-4o", &messages, 0.7, 256, 1.0)
        .await
        .expect("request failed");

    println!("{reply}");
}
```
Run it with `cargo run` from the project root, with `GITHUB_TOKEN` set.
### Multi‑Turn Chat Example (REPL)
This example shows how to maintain conversation history and interact with the model in a loop.
A sketch of a REPL loop (again, the `ChatMessage` construction and the returned reply type are assumptions based on the API overview below):

```rust
use gh_models::{ChatMessage, GHModels};
use std::env;
use std::io::{self, Write};

#[tokio::main]
async fn main() {
    let token = env::var("GITHUB_TOKEN").expect("GITHUB_TOKEN is not set");
    let client = GHModels::new(token);

    // Conversation history, sent in full with every request.
    let mut history: Vec<ChatMessage> = Vec::new();

    loop {
        print!("you> ");
        io::stdout().flush().unwrap();

        let mut line = String::new();
        if io::stdin().read_line(&mut line).unwrap() == 0 {
            break; // EOF
        }
        let input = line.trim();
        if input.is_empty() || input == "exit" {
            break;
        }

        history.push(ChatMessage {
            role: "user".to_string(),
            content: input.to_string(),
        });

        let reply = client
            .chat_completion("openai/gpt-4o", &history, 0.7, 512, 1.0)
            .await
            .expect("request failed");

        println!("model> {reply}");

        // Append the model's reply so the next turn has full context.
        history.push(ChatMessage {
            role: "assistant".to_string(),
            content: reply,
        });
    }
}
```
## 📚 API Overview
### `GHModels::new(token: String)`
Creates a new client using your GitHub token.
### `chat_completion(...)`
Sends a chat request to the model endpoint.
Parameters:
- `model`: Model name (e.g. `"openai/gpt-4o"`)
- `messages`: A slice of `ChatMessage` (`&[ChatMessage]`)
- `temperature`: Sampling temperature
- `max_tokens`: Maximum output tokens
- `top_p`: Nucleus sampling parameter
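To make the `messages` parameter concrete, here is a minimal self-contained sketch of maintaining a conversation history. The `ChatMessage` struct below is a hypothetical stand-in with assumed `role`/`content` fields, defined locally only so the snippet compiles on its own; the real type is exported by the crate.

```rust
// Hypothetical stand-in for gh_models::ChatMessage, defined locally so this
// sketch is self-contained; the real type lives in the crate.
#[derive(Clone, Debug)]
struct ChatMessage {
    role: String,
    content: String,
}

// Build a two-turn history of the shape chat_completion expects
// as `messages: &[ChatMessage]`.
fn build_history() -> Vec<ChatMessage> {
    let mut history = Vec::new();
    history.push(ChatMessage {
        role: "user".to_string(),
        content: "Hello!".to_string(),
    });
    // After each response, append the model's reply so the next request
    // carries the full conversation context.
    history.push(ChatMessage {
        role: "assistant".to_string(),
        content: "Hi! How can I help?".to_string(),
    });
    history
}

fn main() {
    let history = build_history();
    // Passing &history is what fills the `messages` parameter above.
    println!("history holds {} messages", history.len());
}
```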
## 🛠️ Development
Clone the repo, then build and run the bundled examples locally with `cargo run --example <name>`.
## 📄 License
MIT © Pjdur
## 🤝 Contributing
Pull requests welcome!
If you’d like to add streaming support, better error handling, or model introspection, feel free to open an issue or PR.