# Deployer: documentation for version `2.3.X`
Press `q` to exit.
The actual documentation is available with `depl docs`.
Deployer reads its configuration from a YAML file at the `.depl/config.yaml` path.
## Description of working principles
Deployer is, at its core, a local CI/CD. In other words, a `bash` command manager.
Typically, it runs in a separate folder to keep build caches while leaving the code folder clean. However, you can point it at any folder, including the code folder itself; if you already have caches, you can copy them from the source folder, symlink to them, or ignore them completely and build from scratch.
## CLI Utility Description
### Commands
Deployer is primarily a CLI utility. You can see help for any Deployer command by specifying the `-h` option. Here are some examples of the most common commands:
```bash
depl new action # create an action and put it in the Registry
depl new pipeline # create a pipeline and put it in the Registry
depl new content # add some content to Deployer's storage folder
depl new remote # add a new remote host to the Registry
depl ls actions/pipelines/content/remote # list all entities from the Registries
depl cat action/pipeline/content/remote # display information about an entity from the Registry
depl rm action/pipeline/content/remote # remove an entity from the Registry
depl init # init project, fill all attributes
depl edit project # edit project
depl edit . # (the same)
depl with # check compatibility and assign pipeline to project,
# also specify needed variables and artifacts
depl use # use some content from Deployer's storage
depl run # run default pipeline
depl # (the same)
depl check my-pipe # check that the given pipeline exists, its actions are present, and its variables are accessible
depl run my-pipe # run specified `my-pipe` pipeline
depl run configure,build -o build-folder # run `configure` and `build` pipelines in a `build-folder`
depl run -R my-remote my-pipe # run `my-pipe` pipeline on remote host `my-remote`
depl watch my-pipe # watch `my-pipe` pipeline and re-run on file changes
```
### Environment variables
#### 1. `DEPLOYER_SH_PATH`
`DEPLOYER_SH_PATH` specifies the shell executable that runs all your custom commands (see below).
For example, if you have a `go build` command, Deployer will run it as `$DEPLOYER_SH_PATH -c "go build"` (with the `deployer` pipeline driver). It does this because your shell may define aliases or environment variables needed to run your pipeline.
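As a minimal sketch of this wrapping behavior (the variable name is the real one; falling back to `/bin/sh` when it is unset is an assumption for illustration only):

```shell
# Emulate how Deployer invokes a command: `$DEPLOYER_SH_PATH -c "<cmd>"`.
# The /bin/sh fallback below is illustrative, not Deployer's documented default.
SH="${DEPLOYER_SH_PATH:-/bin/sh}"
"$SH" -c 'echo "go build would run here"'
```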
#### 2. `DEPLOYER_STORAGE_PATH`
`DEPLOYER_STORAGE_PATH` specifies a path for your Deployer's content storage.
#### 3. `DEPLOYER_VAULT_ADDR` & `DEPLOYER_VAULT_TOKEN`
If you have a HashiCorp Vault KV2 instance accessible, set these two variables to use Vault secrets in your project.
#### 4. `SMPTCHVERBOSE`
If you need to debug your patches, set the `SMPTCHVERBOSE=1` variable before running `depl`.
### Console Interface (TUI)
Deployer ships with a rich terminal-based customizer, allowing you to forget about manually writing actions and pipelines for your projects. Just try to create an action or a pipeline, and Deployer will ask you about everything.
### Logs
In the Deployer build caches folder, there is a `logs` folder that contains project log files stamped with the date and time of the build.
Deployer prints the log file path on each pipeline run. The log file contains the output of all actions except `Observe`.
## Description of the main entities
### 1. `Action`
An action is the main entity of Deployer. Actions, composed into pipelines, describe build, install, and deploy processes. However, an action by itself cannot be assigned to a project; that's what pipelines are for (see below).
In Deployer's action Registry or in a project's actions list, an action looks like this:
```yaml
info: upx@0.1.0
requirements:
- type: exists_any
paths:
- /bin/upx
- /usr/bin/upx
- ~/.local/bin/upx
action:
type: staged
stage: build
cmd: upx %af%
placeholders:
- "%af%"
```
For each action within a pipeline, a list of `requirements` can be assigned. These are checked *before the pipeline runs*, and if at least one requirement is not met, the pipeline will not be executed. A requirement can be specified in five ways:
```yaml
requirements:
# if any of these paths is found, the requirement will be considered satisfied
- type: exists_any
paths:
- /etc/my-app/config
- /etc/my-app/config.bak
- ~/.config/my-app/config
desc: "This is optional description with information about UPX installation steps."
# if this path exists, the requirement is considered satisfied
- type: exists
path: ./some-file
desc: "Some description"
# if the given executable is found in `$PATH` (i.e. via the `which` command)
- type: in_path
executable: mold
desc: "Some installation steps"
# if this check passes, the requirement will be considered satisfied (for details, see below - action `Check`)
- type: check_success
command:
cmd: /usr/bin/python -V
success_when_found: "Python 3."
desc: "Some description"
# if a given remote host exists in the Registry, is accessible, and its Deployer version is identical to the version of the running Deployer,
# the requirement will be considered satisfied
- type: remote_accessible_and_ready
remote_host_name: domain-01
```
There are 7 action categories:
1. Staged and custom actions (`staged` and `custom`)
2. The test action (`test`)
3. The observe action (`observe`)
4. The actions of adding content to Deployer's storage (`add_to_storage`) and using this content (`use_from_storage`)
5. The action of applying a `patch`
6. The actions that synchronize build folders - from the current host to a remote host (`sync_to_remote`) and vice versa (`sync_from_remote`)
7. `interrupt` (when a user needs to perform some actions by hand before continuing a pipeline)
The concept of a custom command - a command for the terminal shell - is fundamental. The `custom` and `observe` actions, as well as the three main categories of actions, contain one or more custom commands inside.
#### 1.1. Custom command
The command description for Deployer is as follows:
```yaml
# You can leave default/unused fields empty.
command:
cmd: upx %af%
placeholders: ["%af%"] # empty by default
env: [] # by default
ignore_fails: false # by default
show_success_output: false # by default
show_cmd: true # by default
only_when_fresh: false # by default
remote_exec: [] # by default
daemon: false # by default
daemon_wait_seconds: 0 # field is unused when `daemon` is false
```
- `cmd` contains the text of the command to be executed in the terminal
- `placeholders` contains a list of `placeholders` that can be replaced with project variables and artifacts to perform necessary actions with them
- `env` contains a list of required environment variable names (e.g., `SOME_SECRET`, `DEBUG`, etc.) that will be set up on command run
- `ignore_fails` tells Deployer whether to treat a nonzero process exit status as normal command behavior; if not, Deployer will abort pipeline execution and exit with status `1`
- `show_success_output` tells Deployer whether to print the command output always (including when the process exit status is `0`) or only on error
- `show_cmd` tells Deployer whether to print the full command text on the screen; hiding it can be useful when the command contains sensitive variables
- `only_when_fresh` tells Deployer that this action should only be performed on a fresh build (either on the first build, or when explicitly instructed to rebuild from scratch with the `-f` option)
- `remote_exec` contains a list of short hostnames on which this command will need to be executed
- `daemon` tells Deployer whether to run this command as a daemon process or not
- `daemon_wait_seconds` tells Deployer how many seconds to wait for the daemon process to start before moving to the next command
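For instance, a daemonized command combining several of these fields might look like this (a hypothetical sketch; the `my-server` binary and the secret name are illustrative):

```yaml
command:
  cmd: ./my-server --port 8080 # hypothetical binary
  env:
    - SOME_SECRET              # must be present in the environment at run time
  show_cmd: false              # hide the command text, since it may expose secrets
  daemon: true                 # run the command in the background
  daemon_wait_seconds: 3       # give the daemon 3 seconds to start before continuing
```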
#### 1.2. Staged and custom actions
Custom commands can't be used inside pipelines directly, so wrap each of your commands into a custom action:
```yaml
info: ls@0.1.0
action:
type: custom
cmd: ls
show_success_output: true
```
But sometimes you need to be more specific. For example, containerized runs are divided into a build stage (`docker build`) and a run stage (`docker up`); to prevent your services from executing during the build stage, specify `stage: build` or `stage: deploy`:
```yaml
info: cargo-release@0.1.0
tags:
- rust
- cargo
requirements:
- type: exists_any
paths:
- /bin/cargo
- ~/.cargo/bin/cargo
action:
type: staged
stage: build
cmd: cargo build --release
```
#### 1.3. The actions of adding `add_to_storage` content, using `use_from_storage` content, and applying a `patch`
Projects are often templated enough that the same files are copied between them unmodified and are only needed during build or deployment. Such files can be placed in a special folder, with relative paths preserved, and added to Deployer's storage:
```bash
depl new content
```
Then a new action - `use_from_storage` - can be added to the pipelines of projects that need to use these files:
```yaml
info: sync-fake-docker-files@0.1.0
action:
type: use_from_storage
content_info: docker-fake-files@latest
```
This will eventually add the content you need to the run folder when the pipeline is executing.
Additionally, you can also specify the `subfolder` field so that the Deployer adds content to a certain subfolder (useful for monorepositories).
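A hypothetical `use_from_storage` action using the `subfolder` field might look like this (the names are illustrative):

```yaml
info: sync-shared-configs@0.1.0
action:
  type: use_from_storage
  content_info: shared-configs@latest
  subfolder: services/api # the content lands in this subfolder of the run directory
```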
Over time, you will notice that some projects are reused in other projects as dependencies and need to be published somewhere. Package repositories are the best place for this, but if you don't want to publish your project, you can add it to Deployer's storage as content. Moreover, you can add it automatically using the `add_to_storage` action:
```yaml
info: add-my-app-to-content@0.1.0
action:
type: add_to_storage
short_name: my-app
auto_version_rule:
type: plain_file
path: file-with-current-version.txt
```
- `short_name` - string designation of the content, which will be used to place it in the storage and each time it is used
- `auto_version_rule` - a way to automatically determine the version of the content (either `plain_file` with `path` field - a file that will contain only the version and nothing else, or `cmd_stdout` with `cmd` field - a command that will display only the version and nothing else)
- `content` - omit it when you need to push all your artifacts to the storage, or specify it this way:
```yaml
info: add-my-app-to-content@0.1.0
action:
type: add_to_storage
short_name: my-app
auto_version_rule:
type: plain_file
path: file-with-current-version.txt
content:
type: fixed_files
placements:
- from: target/release/my-app
to: my-app
```
In this example, you can see how to pick only the files you need from Deployer's run directory (the relative `from` path) and push them into the content's folder (the relative `to` path).
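For comparison, the `cmd_stdout` variant of `auto_version_rule` could look like this (the `git describe` command is an illustrative choice; any command that prints only the version works):

```yaml
auto_version_rule:
  type: cmd_stdout
  cmd: git describe --tags --abbrev=0 # must print only the version and nothing else
```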
However, sometimes files need to be edited in some way - not so much the content added from Deployer's storage, but, for example, various files among the build dependencies: say, manually patching Python libraries to add desired functionality. And, as a rule, you want to do this without creating forks and synchronizing changes with the upstream repository. You can't do that with `git` patches alone.
For this purpose, Deployer uses the [`smart-patcher`](https://github.com/impulse-sw/smart-patcher) library. Such patches allow source code, complex documents, and even binary files to be modified, letting you locate the necessary inclusions in the content with sifting rules and even scripts in languages such as Python, Lua, and Rhai. For example, the `smart-patcher` repository contains a patch for a Microsoft Word document - and many more examples.
To use smart patches, you need to write a patch file first. Example:
```json
{
"patches": [
{
"files": [{"just": "test_v5.docx"}],
"decoder": {"sh": "tests/test_v5.py"},
"encoder": {"sh": "tests/test_v5.py"},
"replace": {"from_to": ["game", "rock"]}
}
]
}
```
The action of a patch looks like this:
```yaml
info: apply-test-patch@0.1.0
action:
type: patch
patch: path/to/my_patch.json
```
The patch *must be located in the run folder* when you run the pipeline. A very good practice is to write patches and place them as content in Deployer's storage: then both the patch file and its scripts sit side by side and are added during the build process.
When a patch is applied, Deployer displays the number of times it was applied in the project. If the patch was not applied at least once during the pipeline run, *Deployer will raise an error*.
To debug a patch, set the `SMPTCHVERBOSE=1` environment variable before running Deployer. You can also run `depl check my-pipe` with this variable set to see the prospective diff before actually patching.
#### 1.4. Actions that synchronize run folders - from the current host to a remote host (`sync_to_remote`) and vice versa (`sync_from_remote`)
Sometimes you need to synchronize build files between remote hosts and the current host. For example, when some actions must be performed on one host, and some on another. To do this, you can use the built-in actions `sync_to_remote` and `sync_from_remote`.
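No example is given here, but a minimal synchronization action could look like this (a sketch under the assumption that these actions need no extra fields and that the target host comes from the run configuration, e.g. `depl run -R my-remote ...`):

```yaml
info: push-run-folder@0.1.0
action:
  type: sync_to_remote # `sync_from_remote` works the same way in the opposite direction
```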
#### 1.5. Other actions - `interrupt`, `observe` and `test`
> [!NOTE]
> Don't have the configuration example you need? Create the action yourself using the `depl new action` command and display it using `depl cat action my-action@x.y.z`.
`interrupt` is used to manually interrupt the build/deployment of a project. When Deployer reaches this action, it waits for user input so that you can perform the necessary manual steps before continuing.
`observe` is an action that is almost identical to `custom`. It is used, for example, to start Prometheus, Jaeger, or anything else. Its distinctive feature is that it runs without I/O redirection, i.e. you can interact with the programs it runs.
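Assuming `observe` accepts the same command fields as `custom` (the two are described as almost identical; the Prometheus command here is illustrative), an observe action might look like:

```yaml
info: run-prometheus@0.1.0
action:
  type: observe
  cmd: prometheus --config.file=prometheus.yml # interactive, no I/O redirection
```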
And `test` is a special action that allows you to check what the command outputs to `stdout/stderr`:
```yaml
type: test
cmd: target/release/%af% -V
success_when_found: "%af% v1.0.0"
```
- `success_when_found` tells Deployer that if it finds the specified regular expression, the execution of the command will be considered successful
- `success_when_not_found` tells Deployer that if it does not find the specified regular expression, the command execution will be considered successful.
Moreover, if both fields are specified, execution is considered successful only if both conditions hold (the first regular expression must match, and the second must not).
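A sketch combining both fields (the command and patterns are illustrative):

```yaml
type: test
cmd: target/release/my-app --help
success_when_found: "Usage:"       # this regex must match the output
success_when_not_found: "panicked" # and this one must not
```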
This concludes the description of actions, and we move on to pipelines.
### 2. `Pipeline`
A pipeline is an ordered array of actions used to achieve a certain goal. For example: check code quality with a static analyzer, then build, compress, and package the project for a certain distribution and upload it to hosting. Or: build an Android application, sign it, and install it on an ADB-connected device. A pipeline can be composed of any actions; the main example is given in the `.depl/config.yaml` file of this repository:
```yaml
title: install
info: deployer-install@0.1.0
default: true # default is false
driver: deployer # deployer | shell
artifacts:
- from: target/release/depl
to: depl
actions:
- title: Lint
used: cargo-lint@0.1.0
- title: Format
used: cargo-fmt@0.1.0
- title: Build in release
used: cargo-release@0.1.0
- title: Compress
used: upx@0.1.0
with:
"%af%": build-artifact
- title: Install
used: cargo-install-by-copy@0.1.0
with:
"%bin%": build-artifact
```
In general, a pipeline lists its actions in the `actions` field. Each entry's `used` field references the corresponding action, and when you need to set environment or command variables, you can specify them in the `with` map.
Since Deployer v2.X, you can also specify a driver. `driver: deployer` is Deployer's internal driver, while `driver: shell` is equivalent to exporting the pipeline as a shell (`*.sh`) script and executing it.
In addition, if your pipelines need to manage conflicting cache versions (for example, when building a project for different target architectures), you can specify an exclusive build tag in the `exclusive_exec_tag` field. For example, specify `x86_64` when adding a pipeline build for one architecture and `aarch64` for another. Then pipelines will be built in different folders and cache information will be saved in both cases.
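For example, a pipeline tagged for one architecture might be declared like this (a sketch; the names are illustrative):

```yaml
title: build-x86_64
info: build-x86_64@0.1.0
exclusive_exec_tag: x86_64 # a sibling pipeline for another architecture would
                           # specify e.g. `exclusive_exec_tag: aarch64`,
                           # so the two builds get separate cache folders
actions:
  - title: Build
    used: cargo-release@0.1.0
```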
#### 2.1. Containerized assembly and execution, as well as strategies
Since Deployer can execute any commands, it can also automate deployment in containers and clusters using Docker and Kubernetes-like platforms. But most interestingly, Deployer provides automation for building and running your pipelines in Docker and Podman containers with automatic `Dockerfile` generation. Artifacts will also be automatically extracted and placed in the project folder. Containerized building can be useful in cases where building for other platforms or in a different environment is required.
In this regard, Deployer allows additional functions (see below).
Building and execution occur as follows:
1. (Only with the `deployer` driver) an image is built for compiling Deployer for the desired platform (pipelines are executed on top of this image).
2. (Only with the `deployer` driver) Deployer is built.
3. An image is built for compiling the project for the desired platform - with the necessary dependencies and any other commands you specify.
4. If build caching strategies are specified, Deployer performs the build and saves the caches.
5. If a `user` is specified, Deployer creates that user and switches to it.
6. Deployer on the host machine runs Deployer in the container and performs the complete pipeline execution.
> [!NOTE]
> When building in containers, Deployer does not support actions `interrupt`, `observe`, `add_to_storage` and `use_from_storage`, and when running - actions `add_to_storage` and `use_from_storage`.
> [!NOTE]
> To prevent building Deployer before your project, switch to `shell` pipeline driver.
Let's look at an example of a containerized pipeline:
```yaml
title: ci-docker
info: deployer-ci-docker@0.1.0
tags:
- ci
- docker
containered_opts:
with:
- "{MY_API_KEY}": my-api-key-variable
preflight_cmds:
- RUN apt-get update && apt-get install -y build-essential curl git && rm -rf /var/lib/apt/lists/*
- RUN curl https://sh.rustup.rs -sSf | bash -s -- -y --profile minimal --default-toolchain nightly
- ENV PATH="/root/.cargo/bin:${PATH}"
- ENV API_KEY={MY_API_KEY}
- RUN rustup component add clippy
cache_strategies:
- fake_content: docker-fake-files@0.1.0
copy_cmds:
- COPY rust-toolchain.toml .
- COPY .docker-fake-files/rust/lib.rs src/lib.rs
- COPY .docker-fake-files/rust/main.rs src/main.rs
- COPY Cargo.toml .
pre_cache_cmds:
- DEPL
- copy_cmds:
- COPY src/ src/
- COPY DOCS.md .
- COPY MIGRATIONS.md .
pre_cache_cmds:
- RUN touch src/main.rs
- RUN touch src/lib.rs
- DEPL
use_containerd_local_storage_cache: true
prevent_metadata_loading: true
exclusive_exec_tag: ci-docker
artifacts:
- from: target/release/depl
to: depl
driver: shell
actions:
- title: Lint
used: cargo-lint@0.1.0
- title: Build in release
used: cargo-release@0.1.0
```
The only difference is adding the `containered_opts` field, which automatically makes Deployer execute this pipeline in a containerized environment.
- `base_image` - you can specify the base image for building the project (default is `ubuntu:latest`)
- `with` - map of variables (like in pipeline's actions) that will be replaced in `preflight_cmds`, `deployer_build_cmds` and `cache_strategies`
- `preflight_cmds` - list of commands for proper environment setup
- `build_deployer_base_image` and `deployer_build_cmds` - base image and commands for setting up and building Deployer itself (when using `driver: deployer`; see the defaults below)
- `cache_strategies` - caching strategies during build
- `run_detached` - run the Deployer pipeline in detached mode
- `port_bindings` (`[{ "from": 8080, "to": 80 }]`) - add port bindings to allow containerized runs to communicate with the outside environment
- `allow_internal_host_bind` - allow usage of `host.docker.internal`
- `use_containerd_local_storage_cache` - when building in Docker with enabled `containerd` feature in `/etc/docker/daemon.json`, allows saving image cache in the pipeline execution folder, which simplifies cache cleanup
- `prevent_metadata_loading` - prevents reconnecting to registries and searching for a new image when an old one is available (allows building and running Pipelines in containers without Internet access)
- `executor` - allows specifying the build and run executor (default is Docker, specify `podman` to use Podman)
- `user` - creates a user to build and run the pipeline
Since Deployer itself must run inside the containerized environment, verify Deployer/environment compatibility by running the pipeline. If Deployer is built with Python support, it's best to use identical base images.
To preserve build caches rather than constantly rebuilding the pipeline from scratch, it's recommended to specify caching strategies. They are executed when building the containerized environment. Available fields:
- `fake_content` - field for content synchronization to substitute existing files (works like `use_from_storage` but doesn't support `latest` tags)
- `copy_cmds` - commands for copying source code into the image
- `pre_cache_cmds` - commands for preliminary caching
If you need to execute a pipeline as a preliminary caching command, specify the `"DEPL"` command. The containerized pipeline configuration will be added to the container automatically.
Caching strategies are suitable for implementing multi-stage builds. The example above is a two-stage build for a Rust project: first it copies the real `Cargo.toml` and fake `lib.rs` and `main.rs` to compile all project dependencies, then it copies the real `src/` sources and updates timestamps (`RUN touch src/main.rs` and `RUN touch src/lib.rs`) to build the project without rebuilding dependencies. The dependency cache is reused until `Cargo.toml` is edited.
To rebuild the environment from scratch, run Deployer with the `-f`/`--fresh` flag.
Default `build_deployer_base_image` is `ubuntu:latest`.
Default `deployer_build_cmds` are:
```dockerfile
RUN apt-get update && apt-get install -y build-essential curl git && rm -rf /var/lib/apt/lists/*
RUN curl https://sh.rustup.rs -sSf | bash -s -- -y --profile minimal --default-toolchain nightly
ENV PATH="/root/.cargo/bin:${PATH}"
RUN git clone --single-branch --branch stable https://github.com/impulse-sw/deployer.git && cd deployer && cargo build --release
```
#### 2.2. Running on remote hosts with Ansible
If you use a very large number of remote hosts, or want integration with system SSH keys, you can use [Ansible](https://docs.ansible.com/). Deployer lets you use existing inventories (or create them from scratch from Deployer's remote host list) for pipeline runs.
Building and execution occur as follows:
1. Either an Ansible inventory is created or copied.
2. A playbook is created for synchronizing source files and artifacts.
3. Ansible is launched with the specified inventory and created playbook.
> [!NOTE]
> Deployer must be present on all used hosts in the `{{ ansible_env.HOME }}/.cargo/bin/` folder when using the `deployer` driver.
As with containerized builds, you need to add an additional `ansible_opts` field to the pipeline:
```yaml
ansible_opts:
# either create inventory from hosts added to Deployer:
create_inventory:
- localhost-test
# or use defined inventory with host group:
use_inventory: ./my-inventory.ini
host_group: all
```
1. `create_inventory` - list of remote hosts that will be used when creating the Ansible inventory
2. `use_inventory` - an already existing inventory
3. `host_group` - if the inventory has multiple groups, you can specify which specific group to deploy the playbook on
#### 2.3. GitHub Actions and GitLab CI options
> [!NOTE]
> It's highly recommended to watch the source code (for now) to understand possible options.
>
> Sources: `src/github.rs`, `src/gitlab.rs`.
Prepared-to-CI pipeline example:
```yaml
title: ci
info: deployer-ci@0.1.0
tags:
- ci
- gitlab
- github
# GitHub Actions CI:
gh_opts:
on_push_branches:
- stable
- unstable
on_pull_requests_branches:
- unstable
preflight_steps:
- name: Install Rust Toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: clippy
target: x86_64-unknown-linux-gnu
toolchain: nightly
# GitLab CI:
gl_opts:
rules:
- $CI_COMMIT_BRANCH == "unstable"
- $CI_COMMIT_BRANCH == "stable"
- $CI_PIPELINE_SOURCE == "merge_request_event"
base_image: rust
preflight_cmds:
- rustup install nightly
- rustup component add --toolchain nightly clippy
exclusive_exec_tag: ci
artifacts:
- from: target/release/depl
to: depl
driver: shell
actions:
- title: Lint
used: cargo-lint@0.1.0
- title: Build in release
used: cargo-release@0.1.0
```
### <a id="other-entities">3. Other entities</a>
One of the most important entities are variables. They are both the keepers of your secrets and the very dynamic entities that can change the outcome of the pipeline execution. Here is an example of a simple variable:
```yaml
variables:
build-artifact: # variable title
is_secret: false
value:
type: plain
value: target/release/my-app
```
- `is_secret` - whether the variable is a secret (if it is, the command that contains it will not be shown on the screen)
- `value` - the value of the variable itself or information about where and how to get this value from.
There are five types of variables supported now:
1. `plain` - the content of the string is the variable
2. `from_env_var` - the variable will be taken from Deployer's shell environment
3. `from_env_file` - the variable will be taken from the specified `env-file` with the specified key.
4. `from_hc_vault_kv2` - the variable will be taken from the HashiCorp Vault KV2 repository with the specified `mount_path` and `secret_path`
5. `from_cmd` - the variable will be taken from the output of a shell command
Examples:
```yaml
grafana-token:
is_secret: true
value:
type: from_env_file
env_file_path: .env
key: GRAFANA_TOKEN
```
```yaml
grafana-token:
is_secret: false
value:
type: from_env_var
var_name: variable-key
```
```yaml
secret:
is_secret: true
value:
type: from_hc_vault_kv2
mount_path: some/place # The mount path where your KV2 secrets engine is mounted
secret_path: some/path # Path to your secret
```
```yaml
retrieve:
is_secret: true
value:
type: from_cmd
# Note that `$SOME_ENV_KEY` should be really set by environment, not by Deployer.
cmd: curl -H "Authorization: Basic $SOME_ENV_KEY" https://secrets.example.com/retrieve-api-token
```
Note that you must specify two environment variables before using `from_hc_vault_kv2` variables: the `DEPLOYER_VAULT_ADDR` (Vault URL) and `DEPLOYER_VAULT_TOKEN` (Vault token).
Another important entity is the remote host. Deployer stores all hosts in the Registry (the global configuration file, in the `remote_hosts` list). The host structure looks like this:
```yaml
localhost-test:
short_name: localhost-test
ip: 127.0.0.1
port: 22
username: username
ssh_private_key_file: ~/.ssh/id_ed25519
```
> [!NOTE]
> To be able to use a host, you must create an SSH key and allow key-based authorization on the remote host before adding it. If the key is specified inside `~/.ssh/config`, you can skip the `ssh_private_key_file` field.