# Cage: Develop and deploy complex Docker applications
Does your project have too many Docker services? Too many git repos? Cage makes it easy to develop complex, multi-service applications locally. It works with standard `docker-compose.yml` files and helps bring order to the complexity:
- Cage provides a standardized project structure, much like Rails did for web development.
- Cage allows you to work with multiple source repositories, and to mix pre-built Docker images with local source code.
- Cage removes the repetitive clutter from your `docker-compose.yml` files.
- Cage provides secret management, either using a single text file or Hashicorp's Vault.
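For the single-text-file flavor of secret management, the `common.env` files shown later in this README are plain `KEY=value` env files. As a rough sketch (the variable names here are invented, and Cage's exact loading rules may differ), such a file can be created and loaded into a shell like this:

```sh
# Create a plain-text env file of the kind docker-compose and Cage use
# (KEY=value, one per line). The variable names are illustrative only.
cat > common.env <<'EOF'
DATABASE_URL=postgres://db:5432/myapp
SECRET_KEY_BASE=changeme
EOF

# Load every variable in the file into the current shell: `set -a` marks
# all subsequently assigned variables for export.
set -a
. ./common.env
set +a

echo "$DATABASE_URL"
```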
For more information about Cage, see the introductory website.
First, make sure you have a recent version of `docker-compose`:

```sh
$ docker-compose --version
docker-compose version 1.8.1, build 878cff1
```
We provide pre-built `cage` binaries for Linux and MacOS on the release page. The Linux binaries are statically linked and should work on any modern Linux distribution. To install, unzip the binaries and copy them into place:

```sh
unzip cage-*.zip
sudo cp cage /usr/local/bin/
rm cage-*.zip
cage
```
If you would like to install from source, we recommend using `rustup`:

```sh
curl https://sh.rustup.rs -sSf | sh
cargo install cage
```
If you have trouble using cage's vault integration, try reinstalling from source as shown above.
Note that it's possible to build `cage` for Windows, but it's not yet officially supported.
## Trying it out
Create a new application using `cage new`:

```sh
$ cage new myapp
$ cd myapp
```
Pull the pre-built Docker images associated with this application and start up the database pod:

```sh
$ cage pull
$ cage up db
```
Run the `rake` image to initialize your database:

```sh
$ cage run rake db:create
$ cage run rake db:migrate
```
And bring up the rest of the app:

```sh
$ cage up
```
Let's take a look at the pods and services defined by this application:

```sh
$ cage status
db enabled type:placeholder
└─ db
frontend enabled type:service
└─ web ports:3000
rake enabled type:task
└─ rake
```
This shows us that the `web` service is listening on port 3000, so you should be able to access the application at http://localhost:3000. But let's make a change! First, list the available source code for the services in this app:

```sh
$ cage source ls
rails_hello               https://github.com/faradayio/rails_hello.git
```
Try mounting the source code for `rails_hello` into all the containers that use it:

```sh
$ cage source mount rails_hello
$ cage up
$ cage source ls
rails_hello               https://github.com/faradayio/rails_hello.git
  Cloned at src/rails_hello (mounted)
```
You may also notice that since `myapp_rake_1` is based on the same underlying Git repository as `myapp_web_1`, it also has a mount of `src/rails_hello` in the appropriate location. If you change the source on your host system, it will automatically show up in both containers.
Now, create an HTML file under the mounted `src/rails_hello` tree:

```html
<html>
  <head><title>Sample page</title></head>
  <body><h1>Sample page</h1></body>
</html>
```
And reload the website in your browser. You should see the new page!
We can also run container-specific unit tests, which are specified by the container, so that you can invoke any unit test framework of your choice:

```sh
$ cage test web
```
And we can access individual containers using a configurable shell:

```sh
$ cage shell web
root@21bbbb41ad4a:/usr/src/app#
```
The top-level convenience commands like `shell` make it much easier to perform standard development tasks without knowing how individual containers work. For more information, check out the introductory website.
## What's a pod?
A "pod" is a tightly-linked group of containers that are always deployed together. Kubernetes defines pods as:
> A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host” - it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
If you're using Amazon's ECS, a pod corresponds to an ECS "task" or "service". If you're using Docker Swarm, a pod corresponds to a single `docker-compose.yml` file full of services that you always launch as a unit.

Pods typically talk to other pods using ordinary DNS lookups or service discovery. If a pod accepts outside network connections, it will often do so via a load balancer.
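Since service discovery between containers is usually DNS-based, you can explore the same mechanism with ordinary name-resolution tools. Here `localhost` stands in for a service name like `db`, which would only resolve inside the pod's network:

```sh
# Resolve a name through the system resolver, the same mechanism Docker
# networks use to let one service find another by name. `localhost` is a
# stand-in for an in-network service name such as `db`.
getent hosts localhost
```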
See `examples/hello` for a complete example:

```
hello
└── pods
    ├── common.env
    ├── frontend.yml
    └── targets
        ├── development
        │   └── common.env
        ├── production
        │   ├── common.env
        │   └── frontend.yml
        └── test
            └── common.env
```
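The `targets/` directory suggests how per-environment settings layer over shared defaults. As a rough sketch (the variable names are invented, and Cage's actual merge rules may differ), a later env file overrides an earlier one, the way `targets/development/common.env` refines `pods/common.env`:

```sh
# Shared defaults, analogous to pods/common.env.
cat > base.env <<'EOF'
LOG_LEVEL=info
PORT=3000
EOF

# Target-specific overrides, analogous to targets/development/common.env.
cat > dev.env <<'EOF'
LOG_LEVEL=debug
EOF

# Load the shared file first, then the override: the last assignment wins.
set -a
. ./base.env
. ./dev.env
set +a

echo "$LOG_LEVEL $PORT"
```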
The `run-script` command operates similarly to `npm run <script>` or `rake <task>`. Simply define a set of named scripts in the pod's metadata:

```yaml
# tasks.yml
services:
  runner:
    build: .
```

```yaml
# tasks.metadata.yml
services:
  runner:
    scripts:
      populate:
        - ["npm", "run", "populate"]
```
If you run `cage run-script populate`, cage will find all services that have a `populate` script and run it. You can also specify a pod or service with `cage run-script tasks populate`.
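Conceptually, `run-script` is a named-script lookup: each service's metadata maps a script name to a command. This toy shell sketch (not Cage's implementation) shows the dispatch idea:

```sh
# Toy dispatcher: map a script name to a command, the way run-script does
# per service. Real cage reads the mapping from the pod metadata YAML.
run_script() {
  case "$1" in
    populate) echo "running populate" ;;
    *) echo "no script named: $1" >&2; return 1 ;;
  esac
}

run_script populate
```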
If you encounter an issue, it might help to set the following shell variables and re-run the command:

```sh
export RUST_BACKTRACE=1 RUST_LOG=cage=debug,compose_yml=debug
```
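You can also scope the variables to a single invocation instead of exporting them for the whole session. Below, `printenv` stands in for `cage` so the effect is visible without cage installed:

```sh
# Prefixing a command with VAR=value sets the variable for that command
# only. Here printenv substitutes for `cage up` to show what the child
# process sees.
RUST_BACKTRACE=1 RUST_LOG=cage=debug,compose_yml=debug printenv RUST_LOG
```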
Pull requests are welcome! If you're unsure about your idea, then please feel free to file an issue and ask us for feedback. We like suggestions!
## Setting up tools
When working on this code, we recommend installing the following support tools:

```sh
cargo install rustfmt
cargo install cargo-watch
```
We also recommend installing nightly Rust, which produces better error messages and supports extra warnings using Clippy:

```sh
rustup update nightly
rustup override set nightly
```
If `nightly` produces build errors, you may need to update your compiler and libraries to the latest versions:

```sh
rustup update nightly
cargo update
```
If that still doesn't work, try using `stable` Rust instead:

```sh
rustup override set stable
```
If you're using `nightly`, run the following in a terminal as you edit:

```sh
cargo watch "test --features clippy"
```
If you're using `stable`, leave out the Clippy feature:

```sh
cargo watch test
```
Before committing your code, run `cargo fmt`.
This will automatically reformat your code according to the project's
conventions. We use Travis CI to verify that
cargo fmt has been run and
that the project builds with no warnings. If it fails, no worries—just go
ahead and fix your pull request, or ask us for help.
The `openssl` crate needs a compatible version of the OpenSSL libraries. macOS ships with a version using an old API. The best solution is to install the latest OpenSSL with Homebrew and link to it:

```sh
brew install openssl
export OPENSSL_INCLUDE_DIR=$(brew --prefix openssl)/include
export OPENSSL_LIB_DIR=$(brew --prefix openssl)/lib
cargo clean   # needed if your macOS-versioned openssl build failed
cargo build
```
To make an official release, you need to be a maintainer, and you need `cargo publish` permissions. If this is the case, first edit `Cargo.toml` to bump the version number, then regenerate any generated files. Commit the release, using a commit message of the format:

```
v<VERSION>: <SUMMARY>

<RELEASE NOTES>
```

Then run:

```sh
cargo publish
git tag v$VERSION
git push; git push --tags
```
This will rebuild the official binaries using Travis CI, and upload a new version of the crate to crates.io.