Paz is a pluggable, in-house service platform with a PaaS-like workflow: continuous-deployment production environments built on Docker, CoreOS, etcd and fleet.
- Beautiful web UI
- Run anywhere (Vagrant, public cloud or bare metal)
- No special code required in your services
- Built for Continuous Deployment
- Zero-downtime deployments
- Service discovery
- Same workflow from dev to production
- Easy environments
Paz comprises the following components:
- Web front-end – A beautiful UI for configuring and monitoring your services.
- Service directory – A catalog of your services and their configuration.
- Scheduler – Deploys services onto the platform.
- Orchestrator – A REST API used by the web front-end; presents a unified subset of functionality from the scheduler, service directory, fleet and etcd.
- Centralised monitoring and logging.
Service directory: This is a database of all your services and their configuration (e.g. environment variables, data volumes, port mappings and the number of instances to launch). The scheduler ultimately reduces this information to a set of systemd unit files, which are submitted to fleet to run on the cluster. The service directory exposes a REST API and is backed by a LevelDB database.
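As an illustrative sketch only (the field names here are assumptions, not Paz's actual schema), a service definition held in the directory might look something like:

```json
{
  "name": "demo-api",
  "dockerRepository": "quay.io/yourorg/demo-api",
  "numInstances": 3,
  "ports": [{ "container": 8000, "host": 80 }],
  "env": { "NODE_ENV": "production" },
  "volumes": ["/var/data/demo-api:/data"]
}
```

A record along these lines carries everything the scheduler needs to render systemd unit files for fleet.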
Scheduler: This service receives HTTP POST commands to deploy services that are defined in the service directory. Using the service data from the directory, it renders systemd unit files and runs them on the CoreOS cluster via fleet. A history of deployments and their associated configuration is also available from the scheduler. For each service, the scheduler deploys one container for the service itself and an accompanying announce sidekick container.
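To make the service-plus-sidekick pattern concrete, here is a hypothetical pair of rendered units; the names, ports and etcd paths are illustrative assumptions, not the scheduler's exact output:

```ini
# demo-api@.service — the service itself (hypothetical)
[Unit]
Description=demo-api
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill demo-api-%i
ExecStartPre=-/usr/bin/docker rm demo-api-%i
ExecStart=/usr/bin/docker run --rm --name demo-api-%i -p 80:8000 quay.io/yourorg/demo-api
ExecStop=/usr/bin/docker stop demo-api-%i
```

```ini
# demo-api-announce@.service — announce sidekick (hypothetical)
# Publishes the instance's address to etcd with a TTL, so the key
# expires automatically if the service dies.
[Unit]
BindsTo=demo-api@%i.service
After=demo-api@%i.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c "while true; do etcdctl set /paz/services/demo-api/%i ${COREOS_PRIVATE_IPV4}:80 --ttl 60; sleep 45; done"
ExecStop=/usr/bin/etcdctl rm /paz/services/demo-api/%i

[X-Fleet]
MachineOf=demo-api@%i.service
```

The `MachineOf` directive asks fleet to schedule the sidekick on the same machine as the service instance it announces.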
Orchestrator: This service ties all of the other services together, providing a single access point for the front-end to interface with. It also offers a WebSocket endpoint for real-time updates to the web front-end.
Web front-end: A beautiful, easy-to-use web UI for managing your services and observing the health of your cluster. Built with Ember.
Monitoring and logging: cAdvisor is currently used for monitoring; centralised logging is not yet implemented and remains a work in progress.
Paz’s Docker repositories are hosted at Quay.io, but they are public so you don’t need any credentials.
You will need to install fleetctl and etcdctl. On OS X you can install both with Homebrew:
$ brew install etcdctl fleetctl
Clone this repository and run the install script from its root directory. This will bring up a three-node CoreOS Vagrant cluster and install Paz on it. Note that this may take 10 minutes or more to complete.
For extra debug output, run with the DEBUG=1 environment variable set.
If you already have a Vagrant cluster running and want to reinstall the units, use the reinstall script instead.
To interact with the units in the cluster via fleet, specify the URL of etcd on one of your hosts as an endpoint parameter to fleetctl, e.g.:
$ fleetctl -strict-host-key-checking=false -endpoint=http://172.17.8.101:4001 list-units
You can also SSH into one of the VMs and run fleetctl from there:
$ cd coreos-vagrant
$ vagrant ssh core-01
…however, bear in mind that fleet needs to SSH into the other VMs in order to perform operations that call down to systemd (e.g. journal), and for this you need to have SSHed into the VM running the unit in question. For this reason you may find it simpler (albeit more verbose) to run fleetctl from outside the CoreOS VMs.
Paz has been tested on DigitalOcean, but there isn't currently an install script for it. It shouldn't take much work; just be sure to edit the PAZ_DNSIMPLE_* values in digitalocean/user-data. Stay tuned…
There is an integration test that brings up a CoreOS Vagrant cluster, installs Paz and then runs a contrived service on it and verifies that it works:
$ cd test
$ ./integration.sh
Each Paz repository (service directory, orchestrator, scheduler) has tests that run on paz-ci.yld.io (in StriderCD), triggered by a GitHub webhook.
The various components of Paz are spread across several repositories: