opentslm 0.1.0

Rust implementation of OpenTSLM using Burn, WGPU, and llama.cpp
//! LLM backend wrappers and the OpenTSLM soft-prompt model.
//!
//! # Sub-modules
//!
//! - [`llama_cpp`] — thin safe wrapper around the `llama-cpp-4` crate,
//!   providing tokenisation, single-forward-pass logit extraction, and
//!   autoregressive generation.  The frozen GGUF model lives here.
//!
//! - [`opentslm_sp`] — the trainable OpenTSLM SP (soft-prompt) variant.
//!   Holds [`TrainableComponents`](opentslm_sp::TrainableComponents) (encoder
//!   + logit-head) and orchestrates the full encode → bias → generate /
//!   compute-loss pipeline.

pub mod llama_cpp;
pub mod opentslm_sp;
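
The core of the encode → bias → generate pipeline described above is that the trainable logit-head produces a per-token bias which is added to the frozen GGUF model's logits before decoding. A minimal, self-contained sketch of that biasing step — all function names, shapes, and values here are illustrative, not the crate's actual API:

```rust
/// Add the trainable head's bias to the frozen model's logits.
/// (Illustrative only; the real pipeline operates on Burn tensors.)
fn bias_logits(logits: &[f32], bias: &[f32]) -> Vec<f32> {
    logits.iter().zip(bias).map(|(l, b)| l + b).collect()
}

/// Greedy decoding: pick the index of the largest biased logit.
fn greedy_pick(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    let frozen = [1.0_f32, 2.0, 0.5]; // logits from the frozen GGUF model
    let bias = [0.0_f32, -3.0, 2.0];  // bias from the trainable logit-head
    let biased = bias_logits(&frozen, &bias);
    // Biased logits are [1.0, -1.0, 2.5], so token 2 is selected.
    println!("{}", greedy_pick(&biased));
}
```

In the real model the bias is computed from the time-series encoder's output, so the soft prompt steers generation without ever updating the frozen LLM's weights.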