# TL Programming Language

A tensor logic programming language with first-class tensor support, JIT-compiled to LLVM.
## Features

- **Tensor Operations**: `tensor<f32>[128, 128]`, `matmul`, `topk`, etc. via Candle.
- **Logic Programming**: Datalog-style rules with tensor integration.
- **Hybrid Execution**: Logic terms can access tensor data (`data[i]`).
- **JIT Compilation**: High-performance execution using LLVM (Inkwell).
- **GPU Support**: Metal (macOS) and CUDA (Linux) backends with automatic platform detection.
- **Optimization**: Aggressive JIT optimization and fast logic inference.
## Installation

Install from crates.io:

```sh
cargo install tl-lang
```

This installs the `tl` command.
## Prerequisites

### macOS (Metal backend)

1. Install LLVM 18 and OpenSSL:

   ```sh
   brew install llvm@18 openssl
   ```

2. Configure environment variables. Add the following to your shell configuration (e.g., `~/.zshrc`):

   ```sh
   export LLVM_SYS_181_PREFIX=$(brew --prefix llvm@18)
   ```

   Reload your shell (`source ~/.zshrc`) before running `cargo` commands.
### Linux (CUDA backend)

1. Install LLVM 18:

   ```sh
   wget https://apt.llvm.org/llvm.sh
   chmod +x llvm.sh
   sudo ./llvm.sh 18
   sudo apt-get install libllvm18 llvm-18-dev
   ```

2. Install the CUDA Toolkit (12.x recommended).

3. Configure environment variables:

   ```sh
   export LLVM_SYS_181_PREFIX=/usr/lib/llvm-18
   export CUDA_PATH=/usr/local/cuda
   export PATH=$CUDA_PATH/bin:$PATH
   export LD_LIBRARY_PATH=$CUDA_PATH/lib64:$LD_LIBRARY_PATH
   ```
## Build & Run

```sh
cargo run -- examples/hybrid_test.tl
cargo run --release -- examples/readme_n_queens.tl
```

> [!WARNING]
> Metal users: long-running loops may cause RSS (memory) growth due to Metal driver behavior, not TL memory leaks. For stable memory usage, use `TL_DEVICE=cpu`. See the Metal RSS Growth Notes for details.
## Syntax

TL's syntax is very similar to Rust, but without lifetimes.

### Basic Syntax

```
fn main() {
    let x = 5;
    println("{}", x);
}
```
### Tensor Operations

```
fn main() {
    let x = [1.0, 2.0, 3.0];
    let y = [4.0, 5.0, 6.0];
    let z = x + y;
    println("{}", z);
}
```
### `if` Statement

```
fn main() {
    let x = 5;
    if x > 0 {
        println("{}", x);
    }
}
```
### `while` Statement

```
fn main() {
    let mut x = 5;
    while x > 0 {
        println("{}", x);
        x = x - 1;
    }
}
```
### `for` Statement

```
fn main() {
    let x = 5;
    for i in 0..x {
        println("{}", i);
    }
}
```
### Function Definition

```
fn main() {
    let x = 5;
    let y = add(x, 1);
    println("{}", y);
}

fn add(x: i64, y: i64) -> i64 {
    x + y
}
```
### Tensor Comprehension

```
fn main() {
    let t = [1.0, 2.0, 3.0, 4.0];
    let res = [i | i <- 0..4, t[i] > 2.0 { t[i] * 2.0 }];
    println("{}", res);
}
```

For more details, see Tensor Comprehension.
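If the comprehension syntax is unfamiliar: it filters the generated indices with the guards, then maps the `{ ... }` body over the survivors. As an analogy only (this is plain Python, not TL semantics), the example above corresponds to:

```python
# Analogy to the TL comprehension [i | i <- 0..4, t[i] > 2.0 { t[i] * 2.0 }]:
# generate i in 0..4, keep those where t[i] > 2.0, and map the body over them.
t = [1.0, 2.0, 3.0, 4.0]

res = [t[i] * 2.0 for i in range(4) if t[i] > 2.0]

print(res)  # [6.0, 8.0]
```

The difference is that TL's comprehension produces a tensor, not a Python list.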
## VSCode Extension

A syntax highlighting extension is provided in `vscode-tl`.

### Installation

- Open the `vscode-tl` directory in VSCode.
- Press F5 to verify syntax highlighting in a new window.
- Or install manually:

  ```sh
  cd vscode-tl
  npm install -g vsce   # if needed
  vsce package
  code --install-extension tensor-logic-0.1.0.vsix
  ```
## Code Example: N-Queens Problem (Solved via Tensor Optimization)

TensorLogic can solve logical constraints as continuous optimization problems using tensor operations. Below is an example program that solves the N-Queens problem using gradient descent.

```
fn main() {
    let N = 8;
    let solutions_to_find = 5;
    let mut found_count = 0;
    println("Finding {} solutions for N-Queens...", solutions_to_find);
    while found_count < solutions_to_find {
        let lr = 0.5;
        let epochs = 1500;
        // Random board with gradients enabled. softmax over dim 1 turns each
        // row into a probability distribution, so "one queen per row" holds
        // by construction.
        let mut board = Tensor::randn([N, N], true);
        for i in 0..epochs {
            let probs = board.softmax(1);
            // Column constraint: each column's probabilities should sum to 1.
            let col_sums = probs.sum(0);
            let col_loss = (col_sums - 1.0).pow(2).sum();
            // Diagonal constraints: at most one queen per diagonal, so only
            // sums exceeding 1 are penalized (relu before squaring).
            let anti_diag_sums = [k | k <- 0..(2 * N - 1), r <- 0..N, c <- 0..N, r + c == k { probs[r, c] }];
            let main_diag_sums = [k | k <- 0..(2 * N - 1), r <- 0..N, c <- 0..N, r - c + N - 1 == k { probs[r, c] }];
            let anti_diag_loss = (anti_diag_sums - 1.0).relu().pow(2).sum();
            let main_diag_loss = (main_diag_sums - 1.0).relu().pow(2).sum();
            let total_loss = col_loss + anti_diag_loss + main_diag_loss;
            // Every 100 epochs, stop early once the board already places N queens.
            if i % 100 == 0 {
                let current_queens = [r, c | r <- 0..N, c <- 0..N {
                    if probs[r, c] > 0.5 { 1.0 } else { 0.0 }
                }].sum().item();
                if current_queens == 8.0 {
                    break;
                }
            }
            // Manual gradient-descent step, then detach so the graph
            // does not grow across iterations.
            total_loss.backward();
            let g = board.grad();
            board = board - g * lr;
            board = board.detach();
            board.enable_grad();
        }
        // Validate the converged board: exactly one queen per row, N in total.
        let probs = board.softmax(1);
        let mut valid = true;
        let mut total_queens = 0;
        let mut rows = 0;
        while rows < N {
            let mut queen_count = 0;
            let mut cols = 0;
            while cols < N {
                if probs[rows, cols] > 0.5 {
                    queen_count = queen_count + 1;
                    total_queens = total_queens + 1;
                }
                cols = cols + 1;
            }
            if queen_count != 1 {
                valid = false;
            }
            rows = rows + 1;
        }
        if valid && total_queens == N {
            found_count = found_count + 1;
            println("Solution #{}", found_count);
            let mut rows2 = 0;
            while rows2 < N {
                let mut cols2 = 0;
                while cols2 < N {
                    if probs[rows2, cols2] > 0.5 {
                        print(" Q ");
                    } else {
                        print(" . ");
                    }
                    cols2 = cols2 + 1;
                }
                println("");
                rows2 = rows2 + 1;
            }
            println("----------------");
        }
    }
}
```
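The key idea is that a valid placement is exactly a zero of the loss. The following plain-Python sketch (an illustration of the penalty terms, not TL code; the solution `cols` is a known valid 8-queens placement) checks that the same column and diagonal penalties vanish on a correct board:

```python
# Sketch of the "constraints as loss" penalties from the TL program above,
# evaluated on a known valid 8-queens solution. cols[r] is the queen's
# column in row r.
cols = [0, 4, 7, 5, 2, 6, 1, 3]
N = 8

# One-hot board, like a fully converged row-wise softmax.
probs = [[1.0 if c == cols[r] else 0.0 for c in range(N)] for r in range(N)]

# Column loss: each column's probabilities should sum to 1.
col_loss = sum((sum(probs[r][c] for r in range(N)) - 1.0) ** 2 for c in range(N))

# Diagonal losses: penalize only diagonals whose sum exceeds 1
# (relu, then square), mirroring (sums - 1.0).relu().pow(2).sum().
def diag_loss(key):
    loss = 0.0
    for k in range(2 * N - 1):
        s = sum(probs[r][c] for r in range(N) for c in range(N) if key(r, c) == k)
        loss += max(s - 1.0, 0.0) ** 2
    return loss

total = col_loss + diag_loss(lambda r, c: r + c) + diag_loss(lambda r, c: r - c + N - 1)
print(total)  # 0.0 for a valid placement
```

Gradient descent on these penalties drives a random board toward such a zero; the early-exit check in the TL program simply asks whether the board has already snapped to one.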
## Code Example: Neuro-Symbolic AI (Spatial Reasoning)

The following example demonstrates how to fuse vision (tensors) and reasoning (logic). It detects spatial relationships from raw coordinates and infers hidden facts using logic rules (transitivity).

```
object(1, cup).
object(2, box).
object(3, table).

// stacked_on is the transitive closure of on_top_of.
stacked_on(top, bot) :- on_top_of(top, bot).
stacked_on(top, bot) :- on_top_of(top, mid), stacked_on(mid, bot).

fn main() {
    println("--- Neuro-Symbolic AI Demo: Spatial Reasoning ---");
    // Raw scene coordinates; index 1 is the vertical (y) position.
    let cup_bbox = [10.0, 20.0, 4.0, 4.0];
    let box_bbox = [10.0, 10.0, 10.0, 10.0];
    let table_bbox = [10.0, 0.0, 50.0, 10.0];
    println("\n[Visual Scene]");
    println(" [Cup] (y=20)");
    println("   | ");
    println(" [Box] (y=10)");
    println("   | ");
    println("[=========] (Table y=0)");
    println("");
    // Assert logic facts derived from the tensor data.
    if cup_bbox[1] > box_bbox[1] {
        on_top_of(1, 2).
        println("Detected: Cup is on_top_of Box");
    }
    if box_bbox[1] > table_bbox[1] {
        on_top_of(2, 3).
        println("Detected: Box is on_top_of Table");
    }
    println("\n[Logical Inference]");
    println("Querying: Is Cup stacked on Table? (?stacked_on(1, 3))");
    let res = ?stacked_on(1, 3);
    if res.item() > 0.5 {
        println("Result: YES (Confidence: {:.4})", res.item());
        println("Reasoning: Cup -> Box -> Table (Transitivity)");
    } else {
        println("Result: NO");
    }
}
```
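The logic side of this example, the two `stacked_on` rules, is ordinary Datalog transitive closure. A minimal Python sketch of the same fixpoint computation (an illustration of the semantics, not TL's inference engine):

```python
# Naive fixpoint evaluation of:
#   stacked_on(T, B) :- on_top_of(T, B).
#   stacked_on(T, B) :- on_top_of(T, M), stacked_on(M, B).
on_top_of = {(1, 2), (2, 3)}   # facts asserted from the tensor data

stacked_on = set(on_top_of)    # first rule: every direct fact
changed = True
while changed:                 # second rule: iterate until no new facts appear
    changed = False
    for (t, m) in on_top_of:
        for (m2, b) in list(stacked_on):
            if m == m2 and (t, b) not in stacked_on:
                stacked_on.add((t, b))
                changed = True

print((1, 3) in stacked_on)  # True: cup -> box -> table
```

The query `?stacked_on(1, 3)` in the TL program succeeds for the same reason: `(1, 3)` is derivable from the two detected facts in one application of the recursive rule.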
## Documentation

## LICENSE

MIT

## References

## Development