# gh_models 🧠


**gh_models** is a Rust client for accessing GitHub-hosted AI models via the [`https://models.github.ai`](https://models.github.ai) API. It provides a simple interface for chat-based completions, similar to OpenAI’s API, but powered by GitHub’s model infrastructure.

---

## ✨ Features


- Chat completion support for GitHub-hosted models (e.g. `openai/gpt-4o`)
- Easy authentication via GitHub personal access token (PAT)
- Async-ready with `tokio`
- Clean and ergonomic API

---

## 🚀 Getting Started


### 1. Install


Add to your `Cargo.toml`:

```toml
[dependencies]
gh_models = "0.1.1"
```

### 2. Authenticate


Set your GitHub token as an environment variable:

```bash
export GITHUB_TOKEN=your_personal_access_token
```

> 🔐 You can generate a PAT in your GitHub settings:  
> [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)
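
The example further down simply panics when the variable is missing. If you want a friendlier failure, the lookup can be wrapped; `require_token` below is a hypothetical std-only helper (not part of gh_models) that also rejects an empty value:

```rust
use std::env;

/// Hypothetical helper: turn an optional raw value into a usable token,
/// rejecting both a missing and an empty GITHUB_TOKEN. Not part of gh_models.
fn require_token(raw: Option<String>) -> Result<String, String> {
    raw.filter(|t| !t.is_empty())
        .ok_or_else(|| "GITHUB_TOKEN is not set; export a GitHub PAT first".to_string())
}

fn main() {
    match require_token(env::var("GITHUB_TOKEN").ok()) {
        Ok(token) => println!("token loaded ({} chars)", token.len()),
        Err(msg) => eprintln!("{msg}"),
    }
}
```

Splitting the check out of `env::var` this way also makes the failure path easy to unit-test without touching process environment variables.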

---

## 📦 Example


```rust
use gh_models::{GHModels, types::ChatMessage};
use std::env;

#[tokio::main]
async fn main() {
    let token = env::var("GITHUB_TOKEN").expect("Missing GITHUB_TOKEN");
    let client = GHModels::new(token);

    let messages = vec![
        ChatMessage {
            role: "system".into(),
            content: "You are a helpful assistant.".into(),
        },
        ChatMessage {
            role: "user".into(),
            content: "What is the capital of France?".into(),
        },
    ];

    // Arguments: model, messages, temperature, max_tokens, top_p
    let response = client
        .chat_completion("openai/gpt-4o", messages, 1.0, 4096, 1.0)
        .await
        .unwrap();
    println!("{}", response.choices[0].message.content);
}
```

To run this example:

```bash
cargo run --example simple_chat
```

---

## 📚 API Overview


### `GHModels::new(token: String)`


Creates a new client using your GitHub token.

### `chat_completion(...)`


Sends a chat request to the model endpoint. Parameters:
- `model`: Model name (e.g. `"openai/gpt-4o"`)
- `messages`: A list of `ChatMessage` structs
- `temperature`: Sampling temperature
- `max_tokens`: Maximum output tokens
- `top_p`: Nucleus sampling parameter
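
As a rough mental model for the three sampling parameters, the conventional OpenAI-style ranges can be checked before making a call; `validate_sampling` below is a hypothetical helper (not part of gh_models) assuming those conventional bounds:

```rust
/// Hypothetical helper: sanity-check sampling parameters against the
/// conventional OpenAI-style ranges. Not part of the gh_models API.
fn validate_sampling(temperature: f32, max_tokens: u32, top_p: f32) -> Result<(), String> {
    if !(0.0..=2.0).contains(&temperature) {
        return Err(format!("temperature {temperature} outside [0.0, 2.0]"));
    }
    if max_tokens == 0 {
        return Err("max_tokens must be at least 1".into());
    }
    if !(0.0..=1.0).contains(&top_p) {
        return Err(format!("top_p {top_p} outside [0.0, 1.0]"));
    }
    Ok(())
}

fn main() {
    // The values used in the example above pass these checks.
    assert!(validate_sampling(1.0, 4096, 1.0).is_ok());
    assert!(validate_sampling(3.0, 4096, 1.0).is_err());
    println!("sampling parameters look sane");
}
```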

---

## 🛠️ Development


Clone the repo and run examples locally:

```bash
git clone https://github.com/pjdur/gh_models
cd gh_models
cargo run --example simple_chat
```

---

## 📄 License


MIT © [Pjdur](https://github.com/pjdur)

---

## 🤝 Contributing


Pull requests welcome! If you’d like to add streaming support, error handling, or model introspection, feel free to open an issue or PR.