
Multi-Region Consul + Nomad in Azure

NOTE: Still a work in progress...

  • Use the Azure portal to create an Active Directory application and service principal that can access resources (this is for the Azure CLI)
    • See here for the steps necessary to complete this.
    • Additional information related to setting up the Terraform Microsoft Azure Provider can be found here.
    • Credentials should be provided via the ARM_SUBSCRIPTION_ID, ARM_CLIENT_ID, ARM_CLIENT_SECRET, and ARM_TENANT_ID environment variables (see the sketch after this list).
    • When assigning a role to your application, make sure to use one with the appropriate access (the instructions show "Reader", but you'll likely need a more privileged role). See here for the available built-in roles.
  • This project contains two sub-projects:
    • base-infrastructure: creates the Azure Resource Group, Storage Accounts, Virtual Networks/Subnets, and a Bastion host, and runs a remote-exec on the Bastion that uses the Azure CLI to create VNet gateways and VPN connections.
    • consul-nomad-clusters: creates the Consul/Nomad clusters across the desired regions (assumes base-infrastructure exists).
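
A minimal sketch of the credential setup, assuming you create the service principal with the Azure CLI instead of the portal (the service principal name, role, and IDs below are placeholders):

  # Create a service principal scoped to your subscription.
  # "Contributor" is an example role; pick whichever has the access you need.
  az ad sp create-for-rbac --name "azure-demo-sp" \
      --role "Contributor" \
      --scopes "/subscriptions/<SUBSCRIPTION_ID>"

  # Export the credentials that Terraform's azurerm provider reads from the environment.
  export ARM_SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
  export ARM_CLIENT_ID="<APP_ID_FROM_SP_OUTPUT>"
  export ARM_CLIENT_SECRET="<PASSWORD_FROM_SP_OUTPUT>"
  export ARM_TENANT_ID="<TENANT_ID_FROM_SP_OUTPUT>"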

Steps to deploy:

  1. Create the Azure Service Principal and associated credentials as described above.
  2. Change to the base-infrastructure directory and run terraform plan & terraform apply (the full command sequence is sketched after these steps).
  3. Go grab some coffee or a beer (or two, or three). Wait roughly 30 minutes while Terraform deploys the base infrastructure and the Azure CLI creates your VNet gateways (this latter step is the long-running one).
  4. Once the base-infrastructure is provisioned, switch to the consul-nomad-clusters directory and run terraform plan & terraform apply
  5. Once your clusters have been provisioned, you can SSH into one (or all) of them to check cluster status:
    • Consul: consul members -wan
    • Nomad: nomad server-members and nomad node-status
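
Putting the steps together, the end-to-end workflow looks roughly like this (the SSH user and node address are placeholders, and terraform init is assumed before the first plan in each directory):

  # Deploy the base infrastructure (Resource Group, VNets, Bastion, VNet gateways).
  cd base-infrastructure
  terraform init
  terraform plan
  terraform apply

  # After the base infrastructure is up (~30 minutes), deploy the clusters.
  cd ../consul-nomad-clusters
  terraform init
  terraform plan
  terraform apply

  # SSH into one of the provisioned nodes and check cluster membership.
  ssh <user>@<node-address>
  consul members -wan
  nomad server-members
  nomad node-status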

One caveat:

  • You can use terraform destroy from the consul-nomad-clusters directory to tear down your clusters. However, this doesn't work properly for the base-infrastructure sub-project because some resources are built manually via the Azure CLI. The easiest way to tear down the environment in this case is to go to the Azure Portal and simply delete the Resource Group, which will delete all associated resources (a CLI equivalent is sketched below).
    • NOTE: if you take this approach, make sure to delete your local terraform.tfstate* files.
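
If you'd rather stay on the command line, the same teardown can be done with the Azure CLI; the resource group name here is a placeholder:

  # Delete the resource group and everything in it.
  az group delete --name <resource-group-name> --yes

  # Remove the now-stale Terraform state so a future apply starts clean.
  rm terraform.tfstate*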

There's still a bunch of refactoring to do here...
