# Develop large, multi-pod, multi-repo docker-compose apps
This is a work in progress using the `compose_yml` library. It's a reimplementation of our internal, ad hoc tools using the new `docker-compose.yml` version 2 format and Rust.
## What's this for?
- Does your app include more than one `docker-compose.yml` file?
- Are your service implementations spread across multiple git repositories?
- Does your app contain a mixture of permanently running containers and one-shot tasks?
- Does your app run across more than one cluster of machines?
- Do individual components of your app need their own load balancers?
- When running in development mode, do you need to replace 3rd-party services with local containers?
If your answer to one or more of these questions is "yes", then `cage` is probably for you. It provides development and deployment tools for complex `docker-compose` apps, following a convention-over-configuration philosophy.
## Installation
To install, we recommend using `rustup` and `cargo`:
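A sketch of the usual commands (assuming `cage` is published on crates.io under that name):

```sh
# Install the Rust toolchain via rustup, then build and install cage.
curl https://sh.rustup.rs -sSf | sh
cargo install cage
```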
We also provide official binary releases for Mac OS X and for Linux. The Linux binaries are statically linked using musl-libc and rust-musl-builder, so they should work on any Linux distribution, including both regular distributions and stripped down distributions like Alpine. Just unzip the binaries and copy them to where you want them.
The Mac binaries are somewhat experimental because of issues with MacPorts and OpenSSL. If they fail to work, please file a bug and try installing with `cargo`.
## Trying it out
Create a new application using `cage`, and list the associated Git repositories:
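For example (`myapp` is a placeholder name):

```sh
cage new myapp     # generate a new project skeleton
cd myapp
cage repo list     # list the git repositories used by the app's services
```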
Check out the source code for an image locally:
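One plausible invocation; the exact verb may differ, so check `cage repo --help`:

```sh
cage repo clone rails_hello   # check out the repo into src/rails_hello
```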
Start up your application:
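Using the `up` subcommand:

```sh
cage up
```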
You'll notice that the `src/rails_hello` directory is mounted at `/usr/src/app` inside the `myapp_web_1` container, so that you can make changes locally and test them.
Run a command inside the `frontend` pod's `web` container to create a database:
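A sketch, assuming the example app uses Rails-style `rake` tasks:

```sh
cage exec frontend/web rake db:create
```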
We could also just specify the service name `web` instead of the full `frontend/web`, as long as `web` is unique across all pods.
We can also package up frequently-used commands in their own, standalone "task" pods, and run them on demand:
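For example, assuming the project defines a `migrate` task pod:

```sh
cage run migrate
```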
You should be able to access your application at http://localhost:3000/.
You may also notice that since `myapp_migrate_1` is based on the same underlying Git repository as `myapp_web_1`, it also has a mount of `src/rails_hello` in the appropriate location. If you change the source on your host system, it will automatically show up in both containers.
We can run container-specific unit tests, which are specified by the container, so that you can invoke any unit test framework of your choice:
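Using the `test` subcommand against the `web` service:

```sh
cage test web
```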
And we can access individual containers using a configurable shell:
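Using the `shell` subcommand:

```sh
cage shell web
```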
The top-level convenience commands like `test` and `shell` make it much easier to perform standard development tasks without knowing how individual containers work.
## Usage
To see how to use `cage`, run `cage` with no arguments. It supports a fairly long list of subcommands:
```
SUBCOMMANDS:
    build      Build images for the containers associated with this project
    exec       Run a command inside an existing container
    export     Export project as flattened *.yml files
    generate   Commands for generating new source files
    help       Prints this message or the help of the given subcommand(s)
    new        Create a directory containing a new project
    pull       Pull images for the containers associated with this project
    repo       Commands for working with git repositories
    run        Run a specific pod as a one-shot task
    shell      Run an interactive shell inside a running container
    stop       Stop all containers associated with project
    sysinfo    Print information about the system
    test       Run the tests associated with a service, if any
    up         Run project
```
## What's a pod?
A "pod" is a tightly-linked group of containers that are always deployed together. Kubernetes defines pods as:
> A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific "logical host" - it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.
If you're using Amazon's ECS, a pod corresponds to an ECS "task" or "service". If you're using Docker Swarm, a pod corresponds to a single `docker-compose.yml` file full of services that you always launch as a single unit.
Pods typically talk to other pods using ordinary DNS lookups or service discovery. If a pod accepts outside network connections, it will often do so via a load balancer.
## Project format
See `examples/hello` for a complete example.
```
hello
└── pods
    ├── common.env
    ├── frontend.yml
    └── overrides
        ├── development
        │   └── common.env
        ├── production
        │   ├── common.env
        │   └── frontend.yml
        └── test
            └── common.env
```
## Reporting issues
If you encounter an issue, it might help to set the following shell variables and re-run the command:
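A sketch of typical Rust debugging variables (the exact `RUST_LOG` module names are an assumption):

```sh
# Enable backtraces and verbose logging for cage and its YAML library.
export RUST_BACKTRACE=1
export RUST_LOG=cage=debug,compose_yml=debug
```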
## Development notes
Pull requests are welcome! If you're not sure whether your idea would fit into the project's vision, please feel free to file an issue and ask us.
### Setting up tools
When working on this code, we recommend installing the following support tools:
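For example (assuming `rustfmt` and `cargo-watch`, which match the `cargo fmt` and `cargo watch` workflows described in this section):

```sh
cargo install rustfmt      # provides `cargo fmt`
cargo install cargo-watch  # provides `cargo watch`
```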
We also recommend installing nightly Rust, which produces better error messages and supports extra warnings using Clippy:
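A sketch using `rustup`:

```sh
rustup update nightly
rustup default nightly
```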
If `nightly` produces build errors, you may need to update your compiler and libraries to the latest versions:
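For example:

```sh
rustup update nightly   # latest nightly compiler
cargo update            # latest compatible dependency versions
```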
If that still doesn't work, try `stable`:
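Switching back with `rustup`:

```sh
rustup default stable
```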
If you're using `nightly`, run the following in a terminal as you edit:
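One way to do this with `cargo-watch` (the `-x` execute-command syntax assumed):

```sh
cargo watch -x "test --no-default-features --features unstable"
```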
If you're using `stable`, leave out `--no-default-features --features unstable`:
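That is:

```sh
cargo watch -x test
```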
Before committing your code, run:
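That command is `cargo fmt` (per the Travis check described below):

```sh
cargo fmt
```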
This will automatically reformat your code according to the project's conventions. We use Travis CI to verify that `cargo fmt` has been run and that the project builds with no warnings. If it fails, no worries: just go ahead and fix your pull request, or ask us for help.
## Official releases
To make an official release, you need to be a maintainer, and you need to have `cargo publish` permissions. If this is the case, first edit `Cargo.toml` to bump the version number, then regenerate `Cargo.lock` using:
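A plain build is one way to do it, since building refreshes the lockfile after a version bump:

```sh
cargo build
```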
Commit the release, using a commit message of the format:

```
v<VERSION>: <SUMMARY>

<RELEASE NOTES>
```
Then run:

```sh
cargo publish
git tag v$VERSION
git push; git push --tags
```
This will rebuild the official binaries using Travis CI, and upload a new version of the crate to crates.io.