§riglr-macros
Procedural macros for riglr - dramatically reducing boilerplate when creating blockchain tools.
The #[tool] macro is the cornerstone of riglr’s developer experience, transforming simple async
functions, synchronous functions, and structs into full-featured blockchain tools with automatic error handling, JSON
schema generation, and seamless rig framework integration.
§Overview
The #[tool] macro automatically implements the Tool trait for both async and sync functions, as well as structs,
eliminating the need to write ~30 lines of boilerplate code per tool. It generates:
- Parameter struct with proper JSON schema and serde annotations
- Tool trait implementation with error handling and type conversion
- Documentation extraction from doc comments for AI model consumption
- SignerContext integration for secure blockchain operations
- Convenience constructors for easy instantiation
§Code Generation Process
When you apply #[tool] to a function, the macro performs the following transformations:
§1. Parameter Extraction and Struct Generation
// Your function:
#[tool]
async fn swap_tokens(
/// Source token mint address
from_mint: String,
/// Destination token mint address
to_mint: String,
/// Amount to swap in base units
amount: u64,
/// Optional slippage tolerance (default: 0.5%)
#[serde(default = "default_slippage")]
slippage_bps: Option<u16>,
) -> Result<String, SwapError> { ... }
// Generated args struct:
#[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct SwapTokensArgs {
/// Source token mint address
pub from_mint: String,
/// Destination token mint address
pub to_mint: String,
/// Amount to swap in base units
pub amount: u64,
/// Optional slippage tolerance (default: 0.5%)
#[serde(default = "default_slippage")]
pub slippage_bps: Option<u16>,
}
§2. Tool Struct and Trait Implementation Generation
// Generated tool struct:
#[derive(Clone)]
pub struct SwapTokensTool;
impl SwapTokensTool {
pub fn new() -> Self { Self }
}
#[async_trait::async_trait]
impl riglr_core::Tool for SwapTokensTool {
async fn execute(&self, params: serde_json::Value, context: &riglr_core::provider::ApplicationContext) -> Result<riglr_core::JobResult, riglr_core::ToolError> {
// 1. Parse parameters with detailed error messages
let args: SwapTokensArgs = serde_json::from_value(params)
.map_err(|e| format!("Failed to parse parameters: {}", e))?;
// 2. Call your original function
let result = swap_tokens(args.from_mint, args.to_mint, args.amount, args.slippage_bps).await;
// 3. Convert results to standardized JobResult format
match result {
Ok(value) => Ok(riglr_core::JobResult::Success {
value: serde_json::to_value(value)?,
tx_hash: None,
}),
Err(error) => {
// 4. Structured error handling with retry logic
let tool_error: riglr_core::ToolError = error.into();
match tool_error {
riglr_core::ToolError::Retriable(msg) => Ok(riglr_core::JobResult::Failure {
error: msg,
retriable: true,
}),
riglr_core::ToolError::Permanent(msg) => Ok(riglr_core::JobResult::Failure {
error: msg,
retriable: false,
}),
riglr_core::ToolError::RateLimited(msg) => Ok(riglr_core::JobResult::Failure {
error: format!("Rate limited: {}", msg),
retriable: true,
}),
riglr_core::ToolError::InvalidInput(msg) => Ok(riglr_core::JobResult::Failure {
error: format!("Invalid input: {}", msg),
retriable: false,
}),
riglr_core::ToolError::SignerContext(err) => Ok(riglr_core::JobResult::Failure {
error: format!("Signer error: {}", err),
retriable: false,
}),
}
}
}
}
fn name(&self) -> &str {
"swap_tokens"
}
}
// Convenience constructor
pub fn swap_tokens_tool() -> std::sync::Arc<dyn riglr_core::Tool> {
std::sync::Arc::new(SwapTokensTool::new())
}
§3. Documentation Processing and Description Attribute
The macro extracts documentation from three sources and wires them into the Tool implementation:
- Function docstrings → Tool descriptions for AI models
- Parameter docstrings → JSON schema field descriptions
- Type annotations → JSON schema type information
You can also provide an explicit AI-facing description using the attribute:
#[tool(description = "Fetches the URL and returns the body as text.")]
async fn fetch(url: String) -> Result<String, Error> { ... }
Priority logic for the generated Tool::description() method:
- If the description = "..." attribute is present, that string is used
- Else, the item's rustdoc comments are used
- Else, an empty string is returned
This enables AI models to understand exactly what each tool does and how to use it properly.
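To make the priority order concrete, here is a sketch (the function names are hypothetical and the bodies are elided in the same { ... } style as above):
/// Fetches the URL and returns the body as text, following redirects.
#[tool]
async fn fetch_with_docs(url: String) -> Result<String, Error> { ... }
// description() returns the rustdoc comment above.

#[tool(description = "Fetches the URL and returns the body as text.")]
/// This rustdoc comment is not used for the AI-facing description.
async fn fetch_with_attr(url: String) -> Result<String, Error> { ... }
// description() returns the explicit attribute string instead.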
§Constraints and Requirements
§Function Requirements
- Return Type: Must be Result<T, E> where E: Into<riglr_core::ToolError>
// ✅ Valid - custom error type with derive
#[derive(Error, Debug, IntoToolError)]
enum MyError {
    NetworkError(String),
    InvalidInput(String),
}
async fn valid_tool() -> Result<String, MyError> { ... }
// ❌ Invalid - not a Result
async fn invalid_tool() -> String { ... }
// ❌ Invalid - std::io::Error doesn't implement Into<ToolError>
async fn bad_error() -> Result<String, std::io::Error> { ... }
// ✅ Valid - wrap std library errors in custom types
#[derive(Error, Debug, IntoToolError)]
enum FileError {
    #[error("IO error: {0}")]
    Io(#[from] std::io::Error),
}
async fn good_file_tool() -> Result<String, FileError> { ... }
- Parameters: All parameters must implement serde::Deserialize + schemars::JsonSchema
// ✅ Valid - standard types implement these automatically
async fn good_params(address: String, amount: u64) -> Result<(), ToolError> { ... }
// ❌ Invalid - custom types need derives
struct CustomType { field: String }
async fn bad_params(custom: CustomType) -> Result<(), ToolError> { ... }
- Function Type: The macro supports both async and synchronous functions
// ✅ Valid - async function
#[tool]
async fn async_tool() -> Result<String, ToolError> { ... }
// ✅ Valid - sync function (executed within async context)
#[tool]
fn sync_tool() -> Result<String, ToolError> { ... }
Synchronous functions are automatically wrapped to work within the async Tool trait. They execute synchronously within the async execute method.
- Documentation: Function and parameters should have doc comments for AI consumption
/// This description helps AI models understand the tool's purpose
#[tool]
async fn documented_tool(
    /// This helps the AI understand this parameter
    param: String,
) -> Result<String, ToolError> { ... }
§Struct Requirements
For struct-based tools, additional requirements apply:
- Execute Method: Must have an async execute method returning Result<T, E>
- Serde Traits: Must derive Serialize, Deserialize, and JsonSchema
- Clone: Must be Clone for multi-use scenarios
#[derive(serde::Serialize, serde::Deserialize, schemars::JsonSchema, Clone)]
#[tool]
struct MyStructTool {
config: String,
}
impl MyStructTool {
pub async fn execute(&self) -> Result<String, ToolError> {
// Implementation
Ok(format!("Processed: {}", self.config))
}
}
§Complex Usage Examples
§Synchronous Function Example
The macro supports both async and sync functions. Sync functions are useful for computational tools that don’t require I/O operations:
use riglr_core::ToolError;
/// Calculate compound interest for a given principal, rate, and time
///
/// This is a computational tool that doesn't require async operations,
/// so it's implemented as a synchronous function that runs efficiently
/// within the async Tool framework.
#[tool]
fn calculate_compound_interest(
/// Principal amount in dollars
principal: f64,
/// Annual interest rate as a decimal (e.g., 0.05 for 5%)
annual_rate: f64,
/// Time period in years
years: f64,
/// Number of times interest is compounded per year
compounds_per_year: u32,
) -> Result<f64, ToolError> {
if principal <= 0.0 {
return Err(ToolError::invalid_input_string("Principal must be positive"));
}
if annual_rate < 0.0 {
return Err(ToolError::invalid_input_string("Interest rate cannot be negative"));
}
if years < 0.0 {
return Err(ToolError::invalid_input_string("Time period cannot be negative"));
}
if compounds_per_year == 0 {
return Err(ToolError::invalid_input_string("Compounds per year must be at least 1"));
}
let rate_per_compound = annual_rate / compounds_per_year as f64;
let total_compounds = compounds_per_year as f64 * years;
let final_amount = principal * (1.0 + rate_per_compound).powf(total_compounds);
Ok(final_amount)
}
§Important Note on CPU-Intensive Sync Functions
The #[tool] macro executes synchronous functions directly within the async executor’s thread.
This is fine for quick computations, but CPU-intensive operations can block the async runtime.
For CPU-intensive work, wrap your function in tokio::task::spawn_blocking before applying
the #[tool] macro:
use riglr_core::ToolError;
/// CPU-intensive cryptographic operation
///
/// This uses spawn_blocking to avoid blocking the async runtime
#[tool]
async fn compute_hash(
/// Data to hash
data: Vec<u8>,
/// Number of iterations
iterations: u32,
) -> Result<String, ToolError> {
// Move CPU-intensive work to a blocking thread pool
tokio::task::spawn_blocking(move || {
// Simulate expensive computation
let mut hash = data;
for _ in 0..iterations {
hash = sha256::digest(&hash).into_bytes();
}
Ok(hex::encode(hash))
})
.await
.map_err(|e| ToolError::permanent_string(format!("Task failed: {}", e)))?
}
Guidelines for choosing between sync and async with spawn_blocking:
- Use sync functions for quick calculations (< 1ms), simple data transformations, or validation
- Use async + spawn_blocking for CPU-intensive work like cryptography, complex parsing, or heavy computation
- Use regular async for I/O operations like network requests or database queries (see the sketch below)
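To round out the third guideline, a plain async tool for network I/O might look like the following sketch (the price endpoint is hypothetical, and reqwest errors are mapped explicitly since there is no automatic conversion to ToolError):
use riglr_core::ToolError;

/// Fetch the current price of a token from a public HTTP API
#[tool]
async fn fetch_token_price(
    /// Token symbol, e.g. "SOL"
    symbol: String,
) -> Result<String, ToolError> {
    // Hypothetical endpoint; substitute a real price API.
    let url = format!("https://api.example.com/price/{}", symbol);
    // Plain async I/O: the request awaits, so no spawn_blocking is needed.
    let response = reqwest::get(&url)
        .await
        .map_err(|e| ToolError::retriable_string(format!("Request failed: {}", e)))?;
    let body = response
        .text()
        .await
        .map_err(|e| ToolError::permanent_string(format!("Failed to read body: {}", e)))?;
    Ok(body)
}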
§Generic Parameters and Type Constraints
use serde::{Serialize, de::DeserializeOwned};
use schemars::JsonSchema;
/// Generic tool that can process any serializable data
#[tool]
async fn process_data<T>(
/// The data to process (must be JSON-serializable)
data: T,
/// Processing options
options: ProcessingOptions,
) -> Result<ProcessedData, ProcessingError>
where
T: Serialize + DeserializeOwned + JsonSchema + Send + Sync,
{
// The macro handles generic constraints properly
let serialized = serde_json::to_string(&data)?;
// ... processing logic
Ok(ProcessedData::new(serialized))
}
§SignerContext Integration
Tools automatically have access to the current blockchain signer:
use riglr_core::signer::SignerContext;
/// Swap tokens on Solana using Jupiter aggregator
///
/// This tool automatically accesses the current signer from the context,
/// eliminating the need to pass signing credentials explicitly.
#[tool]
async fn jupiter_swap(
/// Input token mint address
input_mint: String,
/// Output token mint address
output_mint: String,
/// Amount to swap in base units
amount: u64,
/// Maximum slippage in basis points
max_slippage_bps: u16,
) -> Result<String, SwapError> {
// Access the current signer automatically
let signer = SignerContext::current().await?;
// Derive RPC client from signer
let rpc_client = signer.rpc_client();
// Get quote from Jupiter
let quote = get_jupiter_quote(&input_mint, &output_mint, amount, max_slippage_bps).await?;
// Build and sign transaction
let tx = build_swap_transaction(quote, &signer.pubkey()).await?;
let signed_tx = signer.sign_transaction(tx).await?;
// Send transaction
let signature = rpc_client.send_and_confirm_transaction(&signed_tx).await?;
Ok(signature.to_string())
}
§Multi-Chain Tool with Dynamic Signer Selection
use riglr_core::signer::{SignerContext, ChainType};
/// Bridge tokens between different blockchains
///
/// Automatically detects the source chain from the current signer
/// and handles cross-chain bridging operations.
#[tool]
async fn bridge_tokens(
/// Source token address
source_token: String,
/// Destination chain identifier
dest_chain: String,
/// Destination token address
dest_token: String,
/// Amount to bridge in base units
amount: u64,
/// Recipient address on destination chain
recipient: String,
) -> Result<BridgeResult, BridgeError> {
let signer = SignerContext::current().await?;
// Dynamic chain detection
let bridge_operation = match signer.chain_type() {
ChainType::Solana => {
SolanaBridge::new(signer).bridge_to_evm(
source_token, dest_chain, dest_token, amount, recipient
).await?
},
ChainType::Ethereum => {
EthereumBridge::new(signer).bridge_to_solana(
source_token, dest_token, amount, recipient
).await?
},
ChainType::Polygon => {
PolygonBridge::new(signer).bridge_cross_chain(
source_token, dest_chain, dest_token, amount, recipient
).await?
},
_ => return Err(BridgeError::UnsupportedChain),
};
Ok(bridge_operation)
}
§Error Handling and Retry Logic
The macro automatically integrates with riglr’s structured error handling.
IMPORTANT REQUIREMENT: The #[tool] macro requires that all error types implement Into<ToolError>.
There is no automatic conversion for standard library error types like std::io::Error or reqwest::Error.
You must define custom error types that provide proper classification and context.
§Recommended Pattern: Custom Error Types with #[derive(IntoToolError)]
The recommended practice is to use the IntoToolError derive macro, which generates the required Into<ToolError> conversion for you:
use riglr_macros::IntoToolError;
use thiserror::Error;
#[derive(Error, Debug, IntoToolError)]
enum SwapError {
#[error("Insufficient balance: need {required}, have {available}")]
InsufficientBalance { required: u64, available: u64 },
#[error("Network congestion, retry in {retry_after_seconds}s")]
#[tool_error(retriable)] // Override default classification
NetworkCongestion { retry_after_seconds: u64 },
#[error("Slippage too high: expected {expected}%, got {actual}%")]
SlippageTooHigh { expected: f64, actual: f64 },
#[error("Invalid token mint: {mint}")]
InvalidToken { mint: String },
}
// The IntoToolError derive macro automatically generates the From<SwapError> for ToolError impl
See the trybuild tests in riglr-macros/tests/trybuild/ for examples:
- pass/custom_error_into.rs - Correct usage with custom error types
- fail/unconvertible_error.rs - What happens when error types don't implement Into<ToolError>
§Alternative: Manual Implementation
If you need more control, you can manually implement the conversion:
use riglr_core::ToolError;
impl From<SwapError> for ToolError {
fn from(error: SwapError) -> Self {
match error {
SwapError::NetworkCongestion { .. } => ToolError::Retriable(error.to_string()),
SwapError::InsufficientBalance { .. } => ToolError::Permanent(error.to_string()),
SwapError::SlippageTooHigh { .. } => ToolError::Permanent(error.to_string()),
SwapError::InvalidToken { .. } => ToolError::Permanent(error.to_string()),
}
}
}
/// Advanced token swap with detailed error handling
#[tool]
async fn advanced_swap(
input_mint: String,
output_mint: String,
amount: u64,
) -> Result<SwapResult, SwapError> {
let signer = SignerContext::current().await?;
// Check balance first
let balance = get_token_balance(&signer, &input_mint).await?;
if balance < amount {
return Err(SwapError::InsufficientBalance {
required: amount,
available: balance,
});
}
// Attempt swap with retries for transient failures
match attempt_swap(&signer, &input_mint, &output_mint, amount).await {
Err(SwapError::NetworkCongestion { .. }) => {
// The macro will automatically mark this as retriable
Err(SwapError::NetworkCongestion { retry_after_seconds: 10 })
},
result => result,
}
}
§Testing Tool Implementations
The macro-generated code is designed to be easily testable:
#[cfg(test)]
mod tests {
use super::*;
use riglr_core::signer::{MockSigner, SignerContext};
use serde_json::json;
#[tokio::test]
async fn test_swap_tool_execution() {
// Create mock signer with expected behavior
let mock_signer = MockSigner::new()
.with_token_balance("EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v", 1000000) // USDC
.expect_transaction("swap")
.returns_signature("5j7s2Hz2UnknownTxHash");
// Test the generated tool
let tool = SwapTokensTool::new();
let result = SignerContext::new(&mock_signer).execute(async {
tool.execute(json!({
"fromMint": "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v",
"toMint": "So11111111111111111111111111111111111111112",
"amount": 1000000,
"slippageBps": 50
})).await
}).await;
assert!(result.is_ok());
mock_signer.verify_all_expectations();
}
}
§Best Practices
§1. Parameter Design
- Use descriptive parameter names that clearly indicate their purpose
- Provide comprehensive doc comments for each parameter
- Use appropriate default values with #[serde(default)] where applicable
- Group related parameters into structs for complex operations (see the sketch below)
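For complex operations, a single documented struct keeps the JSON schema readable. A minimal sketch (LimitOrderParams and place_limit_order are hypothetical):
use riglr_core::ToolError;
use serde::{Serialize, Deserialize};
use schemars::JsonSchema;

/// Parameters for placing a limit order, grouped into one documented struct.
#[derive(Serialize, Deserialize, JsonSchema, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct LimitOrderParams {
    /// Market identifier, e.g. "SOL/USDC"
    pub market: String,
    /// Limit price in quote units
    pub price: f64,
    /// Order size in base units
    pub size: u64,
    /// Whether this is a buy (true) or sell (false) order
    pub is_buy: bool,
}

/// Place a limit order described by a single grouped parameter struct
#[tool]
async fn place_limit_order(
    /// All order parameters in one place
    order: LimitOrderParams,
) -> Result<String, ToolError> {
    // ... validate and submit the order
    Ok(format!("order placed on {}", order.market))
}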
§2. Error Handling
- Define custom error types that implement Into<ToolError>
- Use structured errors that provide actionable information
- Distinguish between retriable and permanent errors appropriately
- Include relevant context in error messages
§3. Documentation
- Write clear, concise function descriptions that explain the tool’s purpose
- Document any side effects or state changes
- Include examples in doc comments where helpful
- Explain any complex parameters or return values
§4. Performance Considerations
- Use Arc<dyn Tool> for tools that will be shared across threads (see the sketch below)
- Implement Clone efficiently for struct-based tools
- Consider caching for expensive operations that don't change frequently
- Use appropriate timeouts for network operations
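As a sketch of the first point, the generated convenience constructor already returns Arc<dyn Tool>, so sharing a tool only needs cheap Arc clones (build_registry is a hypothetical helper):
use std::sync::Arc;
use riglr_core::Tool;

fn build_registry() -> Vec<Arc<dyn Tool>> {
    // swap_tokens_tool() is the constructor generated for the earlier example;
    // cloning the Arc hands the same instance to as many agents as needed.
    let swap: Arc<dyn Tool> = swap_tokens_tool();
    vec![Arc::clone(&swap)]
}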
§5. Security and Business Logic Validation
⚠️ IMPORTANT SECURITY NOTE: While the #[tool] macro and serde automatically handle parameter format validation (JSON schema, type conversion, required fields), your tool implementation is still responsible for all business logic validation and security checks.
§Critical Business Logic Validations:
Financial Operations:
#[tool]
async fn transfer_tokens(
to_address: String,
amount: f64,
slippage_percent: f64,
) -> Result<String, ToolError> {
// ✅ Business logic validation (your responsibility)
if amount <= 0.0 {
return Err(ToolError::invalid_input_string(
"Transfer amount must be positive"
));
}
if slippage_percent >= 5.0 {
return Err(ToolError::invalid_input_string(
"Slippage tolerance too high (max 5%). Consider if this is intentional"
));
}
// ✅ Address validation
if !is_valid_address(&to_address) {
return Err(ToolError::invalid_input_string(
"Invalid recipient address format"
));
}
// ✅ Balance check before executing
let balance = get_current_balance().await?;
if balance < amount {
return Err(ToolError::permanent_string(
format!("Insufficient balance: {} < {}", balance, amount)
));
}
// Proceed with transfer...
}
Smart Contract Interactions:
#[tool]
async fn execute_contract_call(
contract_address: String,
function_name: String,
gas_limit: u64,
) -> Result<String, ToolError> {
// ✅ Contract address validation
if !is_trusted_contract(&contract_address) {
return Err(ToolError::permanent_string(
"Contract not in approved whitelist"
));
}
// ✅ Re-entrancy protection
if is_contract_execution_in_progress(&contract_address) {
return Err(ToolError::retriable_string(
"Contract execution already in progress, avoiding re-entrancy"
));
}
// ✅ Gas limit safety check
if gas_limit > MAX_SAFE_GAS_LIMIT {
return Err(ToolError::invalid_input_string(
"Gas limit exceeds safety threshold"
));
}
// Proceed with contract call...
}
Data Integrity Checks:
#[tool]
async fn process_transaction_data(
tx_hash: String,
expected_amount: f64,
) -> Result<TransactionResult, ToolError> {
// ✅ Transaction hash format validation
if tx_hash.len() != 64 || !tx_hash.chars().all(|c| c.is_ascii_hexdigit()) {
return Err(ToolError::invalid_input_string(
"Invalid transaction hash format"
));
}
// ✅ Cross-reference with external data
let actual_amount = fetch_transaction_amount(&tx_hash).await?;
if (actual_amount - expected_amount).abs() > 0.001 {
return Err(ToolError::permanent_string(
"Transaction amount mismatch detected"
));
}
// Proceed with processing...
}
§Remember: The Macro Handles Format, You Handle Business Logic
- Macro + Serde: Validates JSON structure, types, required fields
- Your Code: Validates ranges, business rules, security constraints, data relationships
§Macro Limitations
§Current Limitations
- Generic Functions: Limited support for complex generic constraints
- Lifetime Parameters: Not currently supported in tool functions
- Associated Types: Cannot use associated types in parameters
- Const Generics: No support for const generic parameters
§Workarounds
For complex generic scenarios, consider using trait objects or type erasure:
// Instead of:
// #[tool]
// async fn complex_generic<T: ComplexTrait>(data: T) -> Result<(), Error> { ... }
// Use:
#[tool]
async fn process_complex_data(
/// JSON representation of the data to process
data: serde_json::Value,
) -> Result<ProcessedResult, ProcessError> {
// Deserialize to specific types inside the function
let typed_data: MyType = serde_json::from_value(data)?;
// ... process typed_data
}
§Integration with External Crates
The macro is designed to work seamlessly with the broader Rust ecosystem:
§Serde Integration
- Automatic #[serde(rename_all = "camelCase")] for JavaScript compatibility
- Support for all serde attributes on parameters (see the sketch below)
- Custom serialization/deserialization via serde derives
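For example, ordinary serde attributes can be placed directly on tool parameters, as in this sketch (the default_slippage helper and the alias are illustrative):
use riglr_core::ToolError;

fn default_slippage() -> Option<u16> {
    Some(50) // 0.5% expressed in basis points
}

/// Quote a swap, accepting both the camelCase key and a legacy snake_case key
#[tool]
async fn quote_swap(
    /// Source token mint; also accepted under the legacy "from_mint" key
    #[serde(alias = "from_mint")]
    from_mint: String,
    /// Defaults to 50 bps when omitted from the JSON payload
    #[serde(default = "default_slippage")]
    slippage_bps: Option<u16>,
) -> Result<String, ToolError> {
    Ok(format!("{} at {:?} bps", from_mint, slippage_bps))
}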
§JSON Schema Generation
- Automatic schema generation via the schemars crate
- Support for complex nested types and enums (see the sketch below)
- Custom schema attributes for fine-tuned control
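As a sketch of nested types in the schema, an enum parameter derives JsonSchema like any other type (OrderKind and submit_order are hypothetical):
use riglr_core::ToolError;
use serde::{Serialize, Deserialize};
use schemars::JsonSchema;

/// A nested enum that appears as a tagged union in the generated JSON schema.
#[derive(Serialize, Deserialize, JsonSchema, Debug, Clone)]
#[serde(rename_all = "camelCase", tag = "kind")]
pub enum OrderKind {
    /// Execute immediately at the best available price
    Market,
    /// Rest on the book at a fixed price
    Limit { price: f64 },
}

/// Submit an order whose kind is described by a nested enum
#[tool]
async fn submit_order(
    /// The kind of order to submit
    kind: OrderKind,
    /// Order size in base units
    size: u64,
) -> Result<String, ToolError> {
    Ok(format!("submitted {:?} for {}", kind, size))
}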
§Async Runtime Compatibility
- Works with any async runtime (tokio, async-std, etc.)
- Proper handling of async trait implementations
- Support for async error handling patterns
The #[tool] macro transforms riglr from a collection of utilities into a cohesive,
developer-friendly framework for building sophisticated blockchain AI agents.
§Attribute Macros
- tool: The #[tool] procedural macro that converts functions and structs into Tool implementations.
§Derive Macros
- IntoToolError: Derives automatic conversion from an error enum to ToolError.