Brichka
A lightweight CLI for running code on Databricks clusters with notebook-like execution contexts and Unity Catalog autocomplete.
Features
- Execute code on Databricks interactive clusters (SQL, Scala, Python, R)
- Shared contexts - run multiple commands that share state, like notebook cells
- Unity Catalog LSP - autocomplete for catalogs, schemas, and tables in any editor
- JSONL output - structured results you can pipe to other tools
Works standalone or with the Neovim plugin.
Installation
Prerequisites:
- (Optional) Authenticated databricks-cli
Nix
Try it out:
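With flakes enabled, the package can usually be run straight from the repository (this assumes the flake exposes brichka as its default package):

nix run github:nikolaiser/brichka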
Add to your flake inputs:
{
  inputs = {
    brichka = {
      url = "github:nikolaiser/brichka";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = inputs@{ self, nixpkgs, brichka, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      modules = [{
        ...
        # "${system}" is your host platform string, e.g. "x86_64-linux"
        environment.systemPackages = [ inputs.brichka.packages."${system}".brichka ];
        ...
      }];
    };
  };
}
Cargo
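One option is a source build straight from the Git repository (assuming the crate is not, or not only, published on crates.io):

cargo install --git https://github.com/nikolaiser/brichka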
Homebrew
Usage
Quick Start
- Select a cluster (uses fzf to choose from available clusters); see the sketch below.
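The subcommand and option names here are assumptions for illustration only; brichka's actual interface may differ, so check its help output.

# For the current directory only (hypothetical subcommand)
brichka cluster select
# Or globally
brichka cluster select --global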
- Run code, inline or from a file/stdin; see the sketch below.
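Again, the exact syntax is a hypothetical sketch contrasting inline execution with piping a file through stdin:

# Inline (hypothetical subcommand and flag)
brichka run --language sql "select 1"
# From file/stdin
cat query.sql | brichka run --language sql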
Results are returned as JSONL in a temporary file. View them with any tool that reads JSONL (e.g. visidata or jq).
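For example, jq can pretty-print every record; the path of the temporary results file is whatever brichka prints, assumed here to be stored in $RESULTS:

# $RESULTS holds the path of the temporary JSONL results file
jq '.' "$RESULTS"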
Databricks Authentication
By default brichka uses the Databricks CLI for authentication. Alternatively, a personal access token can be used to avoid this dependency; instructions for configuring it are printed by the CLI itself, as sketched below.
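The exact subcommand is an assumption; the general help output is a reasonable starting point:

# Hypothetical entry point for the token-configuration instructions
brichka --help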
Shared Execution Contexts (Notebook Mode)
Create a shared context where commands can reference each other's output, like notebook cells:
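The command that creates the context is sketched with a hypothetical name here:

# Hypothetical subcommand; creates a shared context scoped to the current directory
brichka context new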
Now all commands in this directory share state:
-- First command
create or replace temporary view _data as select * from catalog.schema.table
// A second Scala command can access _data
display(spark.table("_data"))
Unity Catalog Language Server
Get autocomplete for catalog/schema/table names in any editor.
Nvim
Add ~/.config/nvim/lsp/brichka.lua:
---@type vim.lsp.Config
return {
  -- cmd and filetypes are assumptions; adjust to however brichka actually
  -- starts its language server and to the filetypes you use with it
  cmd = { "brichka", "lsp" },
  filetypes = { "sql", "scala", "python", "r" },
}
Then enable in your config:
vim.lsp.enable("brichka")
Working with Scala
For Metals (Scala LSP) support, use .sc files with this template:
// Adjust to your target Scala version
//> using scala 2.13
// Adjust to your target Spark version
//> using dep org.apache.spark::spark-sql::3.5.7

import org.apache.spark.sql.{DataFrame, SparkSession}

// brichka: exclude
val spark: SparkSession = ???
def display(df: DataFrame): Unit = ()
// brichka: include

// Your code here
The // brichka: exclude / // brichka: include markers let you add dummy definitions for Databricks-provided objects (like spark and display) that Metals needs to compile the file, but that shouldn't be sent to the cluster.
For multiple files in notebook mode, subsequent files should reference the first:
//> using file fst.sc
// brichka: exclude
// brichka: include