# 🦜️🔗 LangChain Rust
⚡ Building applications with LLMs through composability, with Rust! ⚡
## 🤔 What is this?
This is the Rust language implementation of LangChain.
## Version
Please use version 2.0.0; it is the most stable release.
## Quickstart
### Setup
### Installation
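To add the library to your project (the crate is published as `langchain-rust`), pin the 2.0.0 release recommended above:

```bash
cargo add langchain-rust@2.0.0
```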
## Building with LangChain
LangChain enables building applications that connect external sources of data and computation to LLMs. In this quickstart, we will walk through a few different ways of doing that. We will start with a simple LLM chain, which relies only on information in the prompt template to respond. Finally, we will build an agent, which uses an LLM to determine whether or not it needs to use a tool like a calculator.
### LLM Chain
We'll show how to use models available via API, like OpenAI.
Accessing the API requires an API key, which you can get by creating an account and heading to OpenAI's API keys page. Once we have a key, we'll want to set it as an environment variable (the standard `OPENAI_API_KEY` variable) by running:
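```bash
export OPENAI_API_KEY="..."
```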
We can then initialize the model:
```rust
let open_ai = OpenAI::default();
```
If you'd prefer not to set an environment variable, you can pass the key in directly via the `with_api_key` method when initializing the OpenAI LLM:
```rust
let open_ai = OpenAI::default().with_api_key("<your-api-key>");
```
Once you've installed and initialized the LLM of your choice, we can try using it! Let's ask it what LangSmith is - this is something that wasn't present in the training data so it shouldn't have a very good response.
```rust
let resp = open_ai.invoke("What is LangSmith?").await.unwrap();
```
We can also guide its response with a prompt template. Prompt templates are used to convert raw user input into better input for the LLM.
```rust
let prompt = message_formatter![
    fmt_message!(Message::new_system_message(
        "You are a world class technical documentation writer."
    )),
    fmt_template!(HumanMessagePromptTemplate::new(template_fstring!(
        "{input}", "input"
    ))),
];
```
We can now combine these into a simple LLM chain:
```rust
let chain = LLMChainBuilder::new()
    .prompt(prompt)
    .llm(open_ai.clone())
    .build()
    .unwrap();
```
We can now invoke it and ask the same question. It still won't know the answer, but it should respond in a tone more appropriate to a technical writer!
```rust
match chain.invoke(prompt_args! { "input" => "What is LangSmith?" }).await {
    Ok(result) => println!("Result: {:?}", result),
    Err(e) => panic!("Error invoking LLMChain: {:?}", e),
}
```
If you want the prompt to accept a list of messages, you can use the `fmt_placeholder!` macro:
```rust
let prompt = message_formatter![
    fmt_message!(Message::new_system_message(
        "You are a world class technical documentation writer."
    )),
    // Placeholder that will be filled with a list of messages at invocation time.
    fmt_placeholder!("history"),
    fmt_template!(HumanMessagePromptTemplate::new(template_fstring!(
        "{input}", "input"
    ))),
];
```
And when calling the chain, send the messages:
```rust
match chain
    .invoke(prompt_args! {
        "input" => "Who wrote Twenty Thousand Leagues Under the Sea, and what is my name?",
        "history" => vec![
            Message::new_human_message("My name is: Luis"),
            Message::new_ai_message("Hi Luis"),
        ],
    })
    .await
{
    Ok(result) => println!("Result: {:?}", result),
    Err(e) => panic!("Error invoking LLMChain: {:?}", e),
}
```
### Conversational Chain
Now we will create a conversational chain with memory. By default, the conversational chain comes with a simple memory; we will inject it explicitly as an example. If you don't want the conversational chain to have memory, you can inject the `DummyMemory` instead.
```rust
let llm = OpenAI::default().with_model(OpenAIModel::Gpt35.to_string());
let memory = SimpleMemory::new();
let chain = ConversationalChainBuilder::new()
    .llm(llm)
    .memory(memory.into())
    .build()
    .expect("Error building ConversationalChain");

let input_variables = prompt_args! {
    "input" => "I'm from Peru",
};
match chain.invoke(input_variables).await {
    Ok(result) => println!("Result: {:?}", result),
    Err(e) => panic!("Error invoking LLMChain: {:?}", e),
}

// The memory carries the previous turn into this one.
let input_variables = prompt_args! {
    "input" => "Which are the typical dishes there?",
};
match chain.invoke(input_variables).await {
    Ok(result) => println!("Result: {:?}", result),
    Err(e) => panic!("Error invoking LLMChain: {:?}", e),
}
```
### Agent
So far we have created examples of chains, where each step is known ahead of time. The final thing we will create is an agent, where the LLM decides what steps to take.
First we should create a tool; here we will create a mock calculator tool.
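A minimal sketch of such a tool, assuming the crate's `Tool` trait exposes `name`, `description`, and an async `call` method (exact module paths and signatures may vary between versions):

```rust
use std::error::Error;

use async_trait::async_trait;
use langchain_rust::tools::Tool; // assumed module path

// A mock calculator tool: it ignores its input and always answers "25".
struct Calc {}

#[async_trait]
impl Tool for Calc {
    fn name(&self) -> String {
        "Calculator".to_string()
    }
    fn description(&self) -> String {
        "Useful for making calculations".to_string()
    }
    async fn call(&self, _input: &str) -> Result<String, Box<dyn Error>> {
        Ok("25".to_string())
    }
}
```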
Then we create the agent with memory:
```rust
let llm = OpenAI::default().with_model(OpenAIModel::Gpt4.to_string());
let memory = SimpleMemory::new();
let tool_calc = Calc {};
let agent = ConversationalAgentBuilder::new()
    .tools(&[Arc::new(tool_calc)])
    .output_parser(ChatOutputParser::new().into())
    .build(llm)
    .unwrap();

let input_variables = prompt_args! {
    "input" => "Hello, my name is Luis, and you are a helpful assistant",
};
let executor = AgentExecutor::from_agent(agent).with_memory(memory.into());
match executor.invoke(input_variables).await {
    Ok(result) => println!("Result: {:?}", result),
    Err(e) => panic!("Error invoking agent: {:?}", e),
}

// The agent remembers the conversation and can decide to call the tool.
let input_variables = prompt_args! {
    "input" => "What is my name, and what is 10 + 15?",
};
match executor.invoke(input_variables).await {
    Ok(result) => println!("Result: {:?}", result),
    Err(e) => panic!("Error invoking agent: {:?}", e),
}
```
### Vector Stores
Vector stores let you save embedding data to a database and retrieve it later.
Using embeddings, you can search based on context rather than exact keywords.
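A sketch using the pgvector-backed store; the `StoreBuilder`, `OpenAiEmbedder`, and `VecStoreOptions` names below follow the crate's pgvector integration, but module paths, builder options, and the connection URL are assumptions to adapt to your setup:

```rust
use langchain_rust::{
    embedding::openai::OpenAiEmbedder,
    schemas::Document,
    vectorstore::{pgvector::StoreBuilder, VecStoreOptions, VectorStore},
};

#[tokio::main]
async fn main() {
    // Build a pgvector-backed store with an OpenAI embedder.
    let store = StoreBuilder::new()
        .embedder(OpenAiEmbedder::default())
        .connection_url("postgresql://postgres:postgres@localhost:5432/postgres")
        .vector_dimensions(1536)
        .build()
        .await
        .unwrap();

    // Save documents (and their embeddings) to the database.
    let documents = vec![Document::new(
        "langchain-rust is a Rust port of LangChain.".to_string(),
    )];
    store
        .add_documents(&documents, &VecStoreOptions::default())
        .await
        .unwrap();

    // Retrieve the 2 documents most similar in context to the query.
    let results = store
        .similarity_search("What is langchain-rust?", 2, &VecStoreOptions::default())
        .await
        .unwrap();
    println!("{:?}", results);
}
```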
Or you can use the convenience macros.
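A sketch, assuming the crate exposes `add_documents!` and `similarity_search!` macros mirroring the methods above:

```rust
// Save documents via the macro (equivalent to store.add_documents(...)).
add_documents!(store, &documents).await.unwrap();

// Query via the macro (equivalent to store.similarity_search(...)).
let results = similarity_search!(store, "What is langchain-rust?", 2)
    .await
    .unwrap();
println!("{:?}", results);
```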