cooklang-import
A command-line tool to import recipes into Cooklang format using AI-powered conversion.
Features
- Multi-provider AI support: OpenAI, Anthropic Claude, Azure OpenAI, Google Gemini, and Ollama (local Llama models)
- Automatic fallback: Seamlessly switch between providers on failure
- Flexible configuration: TOML-based config with environment variable overrides
- Smart extraction: Supports JSON-LD, HTML class-based, and plain-text extraction
- Metadata preservation: Automatically extracts and includes recipe metadata
- Local AI support: Run completely offline with Ollama
Getting started
Prerequisites
- Rust
- An AI provider (choose one or more):
  - Free & Local: Ollama for running Llama models on your machine
  - Cloud Options: OpenAI, Anthropic Claude, Azure OpenAI, or Google Gemini
Installation
Configuration
Quick Start (Environment Variables Only)
Set your API key as an environment variable:
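For example, with an OpenAI key (the key value below is a placeholder — substitute your own):

```shell
# Placeholder key value — replace with your real OpenAI API key.
export OPENAI_API_KEY="sk-your-key-here"
```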
The tool will work immediately with OpenAI's GPT-4.1-mini model (October 2025).
Advanced Configuration (config.toml)
For multi-provider support and advanced features, create a config.toml file:
Edit config.toml to configure your preferred providers:
```toml
# Default provider to use
default_provider = "openai"

# OpenAI Configuration
[providers.openai]
enabled = true
model = "gpt-4.1-mini" # Fast and cost-effective. Use "gpt-4.1-nano" for lowest latency
temperature = 0.7
max_tokens = 2000
# API key loaded from OPENAI_API_KEY environment variable
# or set here: api_key = "sk-..."

# Anthropic Claude Configuration
[providers.anthropic]
enabled = true
model = "claude-sonnet-4.5" # Use "claude-haiku-4.5" for faster/cheaper option
temperature = 0.7
max_tokens = 4000
# API key loaded from ANTHROPIC_API_KEY environment variable

# Provider Fallback Configuration
[fallback]
enabled = true
order = ["openai", "anthropic"]
max_retries = 3
retry_delay_ms = 1000
```
Configuration Priority
Configuration is loaded with the following priority (highest to lowest):
- Environment variables (e.g., `OPENAI_API_KEY`, `COOKLANG__PROVIDERS__OPENAI__MODEL`)
- `config.toml` file in current directory
- Default values
Environment Variable Format
For nested configuration, use double underscores:
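For example, the nested key `providers.openai.model` maps to the following variable (the name pattern matches the example in the Configuration Priority section):

```shell
# Double underscores separate nesting levels: providers -> openai -> model
export COOKLANG__PROVIDERS__OPENAI__MODEL="gpt-4.1-nano"
```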
Usage Examples
Use Case 1: URL → Cooklang (Default)
Fetch a recipe from a URL and convert to Cooklang format:
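A typical invocation might look like this (the URL is illustrative):

```shell
cooklang-import https://example.com/chocolate-chip-cookies
```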
Use Case 2: URL → Recipe (Extract Only)
Download and extract recipe data without AI conversion:
This outputs the raw ingredients and instructions in markdown format without Cooklang markup.
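For example (the `--extract-only` flag name is a guess based on this use case — run `cooklang-import --help` for the actual flag):

```shell
cooklang-import --extract-only https://example.com/chocolate-chip-cookies
```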
Use Case 3: Markdown → Cooklang
Convert structured markdown recipes to Cooklang format (when you have pre-separated ingredients and instructions):
Use Case 4: Text → Cooklang (NEW!)
Convert plain text recipes to Cooklang format (LLM will parse and structure the recipe):
This is useful for unstructured recipe text where ingredients and instructions are not clearly separated.
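This might look something like the following (passing a local text file is an assumption — check `cooklang-import --help` for the actual input mechanism):

```shell
cooklang-import recipe.txt
```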
Advanced Options
Custom LLM Provider
Use a different LLM provider (requires config.toml):
Available providers: `openai`, `anthropic`, `google`, `azure_openai`, `ollama`
Custom Timeout
Set a custom timeout for HTTP requests:
Combined Options
Combine multiple options:
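Assuming `--provider` and `--timeout` flags (names inferred from the two sections above — verify with `--help`):

```shell
cooklang-import --provider anthropic --timeout 60 https://example.com/chocolate-chip-cookies
```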
CLI Help
For complete usage information:
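```shell
cooklang-import --help
```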
Supported AI Providers
OpenAI
- Models: gpt-4.1-mini (default, Oct 2025), gpt-4.1-nano (fastest), gpt-4o-mini, gpt-4o
- Environment Variable: `OPENAI_API_KEY`
- Configuration: See `config.toml.example`
Anthropic Claude
- Models: claude-sonnet-4.5 (Sep 2025), claude-haiku-4.5 (fastest, Oct 2025), claude-opus-4.1
- Environment Variable: `ANTHROPIC_API_KEY`
- Configuration: See `config.toml.example`
Azure OpenAI
- Models: Your deployed models (e.g., gpt-4, gpt-35-turbo)
- Environment Variable: `AZURE_OPENAI_API_KEY`
- Required Config: `endpoint`, `deployment_name`, `api_version`
Google Gemini
- Models: gemini-2.5-flash (latest, Sep 2025), gemini-2.0-flash-lite
- Environment Variable: `GOOGLE_API_KEY`
- Configuration: See `config.toml.example`
Ollama (Local Llama Models)
- Models: llama3, llama2, codellama, mixtral, and more
- Requirements: Ollama installed locally
- No API Key Required: Runs entirely on your machine
- Base URL: `http://localhost:11434` (default)
- Setup:
  1. Install Ollama: `curl -fsSL https://ollama.ai/install.sh | sh`
  2. Pull a model: `ollama pull llama3`
  3. Configure in `config.toml` or start using immediately
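A minimal Ollama entry in `config.toml` might look like this (the key names mirror the other provider sections; `base_url` is an assumption — see `config.toml.example` for the exact schema):

```toml
[providers.ollama]
enabled = true
model = "llama3"
base_url = "http://localhost:11434"
```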
Provider Fallback
Enable automatic fallback between providers for reliability:
```toml
[fallback]
enabled = true
order = ["openai", "anthropic", "google"]
max_retries = 3
retry_delay_ms = 1000
```
When enabled, the tool will:
- Try the primary provider with exponential backoff retries
- If all retries fail, automatically switch to the next provider
- Continue until a provider succeeds or all providers are exhausted
Migration from Environment Variables Only
If you're upgrading from a version that only used environment variables:
- No action required: environment variables continue to work
- Optional: create `config.toml` for advanced features
- Keep your `OPENAI_API_KEY` in environment variables for security
Troubleshooting
"OPENAI_API_KEY must be set"
Set your API key:
"No providers available in fallback configuration"
Ensure at least one provider is:
- Enabled in `config.toml` (`enabled = true`)
- Included in `fallback.order`
- Configured with a valid API key
Rate Limiting
If you encounter rate limits:
- Enable fallback to use multiple providers
- Increase `retry_delay_ms` in config
- Use a different provider temporarily
Library Usage
cooklang-import can also be used as a Rust library in your own projects.
Installation
Add to your Cargo.toml:
```toml
[dependencies]
cooklang-import = "0.7.0"
tokio = { version = "1.0", features = ["full"] }
```
API Overview
The library provides three main API styles:
- Builder API (recommended): Flexible, type-safe builder pattern with fluent interface
- Convenience Functions: Simple high-level functions for common use cases
- Low-level API: Direct access to fetching and conversion functions
Builder API
The builder API provides the most control and flexibility:
```rust
// Sketch reconstructed from this README; the exact item names
// (RecipeImporter, builder) are assumptions — check the crate docs.
use cooklang_import::RecipeImporter;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cooklang = RecipeImporter::builder()
        .url("https://example.com/recipe")
        .build()
        .await?;
    println!("{cooklang}");
    Ok(())
}
```
Convenience Functions
For simple use cases, use the convenience functions:
```rust
// The convenience function name here is an assumption — check the crate docs.
use cooklang_import::import_recipe;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cooklang = import_recipe("https://example.com/recipe").await?;
    println!("{cooklang}");
    Ok(())
}
```
Advanced Builder Options
The builder supports additional configuration including custom providers and timeouts:
```rust
// Builder construction is omitted; URLs and argument values are illustrative.
use cooklang_import::LlmProvider;
use std::time::Duration;

// Use a custom LLM provider (requires config.toml with provider settings)
let result = builder
    .url("https://example.com/recipe")
    .provider(LlmProvider::Anthropic)
    .build()
    .await?;

// Set a custom timeout for network requests
let result = builder
    .url("https://example.com/recipe")
    .timeout(Duration::from_secs(60))
    .build()
    .await?;

// Combine both options
let result = builder
    .url("https://example.com/recipe")
    .provider(LlmProvider::Ollama)
    .timeout(Duration::from_secs(120))
    .build()
    .await?;
```
Available Providers:
- `LlmProvider::OpenAI` - OpenAI GPT models (default if no config)
- `LlmProvider::Anthropic` - Claude models
- `LlmProvider::Google` - Gemini models
- `LlmProvider::AzureOpenAI` - Azure OpenAI service
- `LlmProvider::Ollama` - Local Llama models via Ollama
Note: Custom providers require a config.toml file with appropriate provider configuration. See the main Configuration section for details.
Error Handling
The library provides structured error types:
```rust
// The crate's concrete error enum and variants aren't shown in this README;
// this sketch simply matches on Ok/Err — see the crate docs for the full type.
match builder.url("https://example.com/recipe").build().await {
    Ok(cooklang) => println!("{cooklang}"),
    Err(e) => eprintln!("import failed: {e}"),
}
```
Examples
See the examples/ directory for complete examples:
- `builder_basic.rs` - Basic builder usage for all three use cases
- `simple_api.rs` - Using convenience functions
- `builder_advanced.rs` - Advanced features like custom providers and error handling
Run examples with:
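From a checkout of the repository:

```shell
cargo run --example builder_basic
```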
Development
Run tests:
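```shell
cargo test
```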
Run with debug logging:
```shell
# Prefix any invocation with RUST_LOG=debug; the URL is illustrative.
RUST_LOG=debug cooklang-import https://example.com/recipe
```