Opinionated CNCF-based, Docker Compose setup for everything needed to develop a 12factor app


Gryllidae 🦗

An (opinionated) collection of open-source, CNCF-based Docker services that assist in building 12factor applications.

Future work - Contribute to awesome-compose.

Table of Contents

What you get

All passwords for these services are instrument. For example, the Grafana credentials are admin:instrument, and InfluxDB can be queried using instrument as the password.
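For reference, here is a quick way to exercise those credentials against InfluxDB. This is a sketch only: the admin username and the default InfluxDB port 8086 are assumptions, so check the compose file for the actual values.

# Hypothetical check; username and port depend on this stack's InfluxDB config
curl -G http://localhost:8086/query -u admin:instrument --data-urlencode "q=SHOW DATABASES"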

What you end up with

  • consul
  • logspout
  • traefik (dashboard and routes)
  • grafana: Docker dashboard (id:893), Telegraf dashboards (id:928, id:5955)
  • jaeger

Getting Started

First, download this repo as a ZIP (use the clone button) and extract it as a folder into your project as .instrument.

Next, create a Docker network for the components

docker network create instrument_web
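To double-check the network exists before continuing:

docker network ls --filter name=instrument_web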

Then, if you already have your own Makefile, add the .instrument make targets to it

echo -e '\ninclude .instrument/targets.mk' >> Makefile

Otherwise, create a Makefile of your own that includes the provided make targets. (Trust me, using one is nicer than memorizing a bunch of Docker commands.)

Here's a starting template. Note: clean and install should be updated to actually do things that are dependent on your own code. Also important: tabs matter when updating a Makefile.

install:
	@echo "installing!"

clean:
	@echo "cleaning!"

include .instrument/targets.mk
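With that saved, the placeholder targets can be run right away to confirm the include is wired up (the messages come from the template above):

make install   # prints "installing!"
make clean     # prints "cleaning!"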

Alright, alright, alright!

Having the infrastructure in place is great, but it doesn't help you, the application developer, ensure your app will run on these services. In order to test your own app in this environment, make your own docker-compose.yml file

Here's a starting template

version: '3'
networks:
  instrument_web:
    external: true

## Update here
services:
  app:
    image: containous/whoami  # Replace with your own image
    ports:  # Update with your own ports
      - "8081:80"
    networks: ['instrument_web']  # This attaches to the underlying infrastructure network
    environment:  # Update with your environment
      FOO: bar
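As a quick sanity check with the template as-is (a sketch: containous/whoami is just a placeholder image that echoes back HTTP request details, and port 8081 comes from the template above):

docker-compose up -d app
curl http://localhost:8081   # whoami responds with details about the request
docker-compose down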

Next, add additional services that specify:

  1. Any dependent services (such as databases, Kafka, etc.). Make sure to only copy the internal sections of any services block.

  2. (Optional) Any links to existing, external services.

    If you use a remote service over the network, it is up to you to ensure you have the appropriate network connectivity and firewall options from your machine to those.

    Best practices say to configure such connections via the environment block of Compose or in-app config wiring.

  3. Externalized secrets

    Note: For simplicity, Hashicorp Vault is excluded from this stack.

    Docker Compose can reference a .env file, should you need local credentials; otherwise use dummy credentials for test databases and such (see the sketch after this list).

    No one is responsible for leaking access credentials in Git repos but yourself.

    # add to your gitignore
    echo -e '\n.env' >> .gitignore
    
    vim .env
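Here's a minimal sketch of that wiring. The variable names and values below are made up for illustration; Compose substitutes ${...} references in docker-compose.yml from the shell environment or from a .env file sitting next to the compose file.

# .env (git-ignored; hypothetical names and values)
DB_USER=app
DB_PASSWORD=changeme

Then reference them from the environment block of your service:

environment:
  DB_USER: ${DB_USER}
  DB_PASSWORD: ${DB_PASSWORD}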

Are we there yet?

YES!!!

With all that in place, write in your services (refer to Compose docs above as needed), then get ready to run your application(s)!

make ult-instrument

This will run until stopped via Ctrl + C.

Extending

Hopefully the services listed above in What you get are enough. Of course, feel free to mix and match with whatever you think is necessary.

Troubleshooting

It doesn't seem to work

Make sure you have the following file structure. Any extra files should include documentation and your local application code + build processes. As mentioned below, this has mostly been tested with Apache Maven, but NPM or similar tooling could be built around this process.

.instrument/
  conf/
    grafana/
    telegraf/
      telegraf.conf
  targets.mk
  docker-compose.yml
Makefile
docker-compose.yml
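To compare your layout against the expected one:

find .instrument -maxdepth 2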

What is my image?

Are you stuck here?

version: '3'
services:
   myapp:
      image: ???

You can either pull an image directly off Docker Hub, or, more commonly, you are in development mode and testing services locally. When an image is local, you can find it with docker images.

When using any Docker image, the full image name would look like

[docker-registry]/[git-org]/[image-name]:[image-version]

Where the parts of a full Docker image reference are (a concrete example follows the list):

  1. (optional) Docker Registry
  2. (optional) Docker Org/User
  3. (required) Docker Image
  4. (preferred) Image version
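For illustration, a fully qualified reference (all names here are hypothetical) could look like:

registry.example.com/myorg/myapp:1.2.3

# With the registry omitted, Docker Hub is assumed:
myorg/myapp:1.2.3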

Without a registry specified, the default is Docker Hub. Use docker images to see what images are already downloaded on your local machine. Creating a Docker account is free and will let you create your own Docker Org/User where you can push images for others to use.

If you exclude the image version, it defaults to latest. Docker best practices say to always use a defined version, preferably SemVer, which the maven-release-plugin can generate alongside the Fabric8 docker-maven-plugin.

Wait... Apache Maven?

Yes, you heard me right... Read on.

I only use Dockerfile, not Maven, what now?

Maven is not only for Java apps! You will need Java installed, but the feature set of Maven outweighs that burden.

As mentioned, the Fabric8 plugin works fine and has been tested with this project, so refer to its documentation for configuration options. In general, it works similarly to the maven-assembly-plugin in that it bundles the final build artifacts into a Docker image.
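As a rough sketch of the day-to-day usage (assuming the docker-maven-plugin is already configured in your pom.xml), packaging the app and building the image is a single command:

mvn clean package docker:build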

Other options for building Docker images from Maven include

If you find Gradle, SBT, or another build tool works better for you, feel free to let us know.

What is going wrong? Everything is falling apart!

Relax. Breeeaathee.

If everything started okay, logs from stdout / stderr of all the services will be tail'd. They can also be followed in another tab (terminal or browser) via curl http://localhost:8000/logs. Refer to the Logspout documentation on performing filters. Of course, grep works great here too.
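For example, to narrow the stream to lines mentioning your own service (the service name app here is hypothetical):

curl -s http://localhost:8000/logs | grep app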

Should a container die, you'll need to debug it.

Useful commands:

  • docker-compose ps - See what's running (must be run in the same folder as the compose file)
  • docker-compose logs <name> - Dump the logs of that service. Use docker-compose logs -f <name> to follow the logs.
  • docker-compose exec <name> bash - Shell into a container to inspect files and processes like any other terminal session.

Extras

I really like Minikube/Minishift and Helm

They are nice, sure, but Kube YAML is needlessly verbose for a local environment.
