3-networks-hub-and-spoke

This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in the Google Cloud security foundations guide. The following list describes the parts of the guide.

0-bootstrap: Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD Pipeline for foundations code in subsequent stages.
1-org: Sets up top-level shared folders, monitoring and networking projects, and organization-level logging, and sets baseline security settings through organizational policy.
2-environments: Sets up development, nonproduction, and production environments within the Google Cloud organization that you've created.
3-networks-dual-svpc: Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub.
3-networks-hub-and-spoke (this file): Sets up base and restricted shared VPCs with all the default configuration found in step 3-networks-dual-svpc, but with the architecture based on the Hub and Spoke network model. It also sets up the global DNS hub.
4-projects: Sets up a folder structure, projects, and an application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage.
5-app-infra: Deploys a simple Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects.

For an overview of the architecture and the parts, see the terraform-example-foundation README.

Purpose

The purpose of this step is to:

  • Set up the global DNS Hub.
  • Set up base and restricted Hubs and their corresponding Spokes, with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated or Partner Interconnect, and baseline firewall rules for each environment.

Prerequisites

  1. 0-bootstrap executed successfully.

  2. 1-org executed successfully.

  3. 2-environments executed successfully.

  4. Obtain the value for the access_context_manager_policy_id variable. It can be obtained by running the following commands. They assume you are at the same level as the terraform-example-foundation directory; if you run them from another directory, adjust your paths accordingly.

    export ORGANIZATION_ID=$(terraform -chdir="terraform-example-foundation/0-bootstrap/" output -json common_config | jq '.org_id' --raw-output)
    export ACCESS_CONTEXT_MANAGER_ID=$(gcloud access-context-manager policies list --organization ${ORGANIZATION_ID} --format="value(name)")
    echo "access_context_manager_policy_id = ${ACCESS_CONTEXT_MANAGER_ID}"
  5. For the manual step described in this document, you must have Terraform version 1.3.0 or later installed.

Note: Make sure that you use version 1.3.0 or later of Terraform throughout this series. Otherwise, you might experience Terraform state snapshot lock errors.
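
You can check your installed version with the Terraform CLI itself:

    terraform version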

Troubleshooting

Please refer to troubleshooting if you run into issues during this step.

Usage

Note: If you are using macOS, replace cp -RT with cp -R in the relevant commands. The -T flag is needed for Linux but causes problems on macOS.
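
For example, the copy of this step's contents (shown later in the Cloud Build instructions) looks like this on each platform:

    # Linux
    cp -RT ../terraform-example-foundation/3-networks-hub-and-spoke/ .

    # macOS
    cp -R ../terraform-example-foundation/3-networks-hub-and-spoke/ .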

Networking Architecture

This step uses the Hub and Spoke architecture mode. More details can be found in the Networking section of the Google Cloud security foundations guide.

Hub and Spoke transitivity can be used to deploy network virtual appliances (NVAs) on the hub Shared VPC that act as gateways for spoke-to-spoke traffic, allowing connectivity across environments. To enable Hub and Spoke transitivity, set the variable enable_hub_and_spoke_transitivity to true.
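
A minimal sketch, assuming the variable is defined in the common.auto.tfvars file you create later in this step (adjust the file name if your configuration defines it elsewhere):

    # Assumption: enable_hub_and_spoke_transitivity is set in common.auto.tfvars
    echo 'enable_hub_and_spoke_transitivity = true' >> common.auto.tfvars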

Note: The default allow-transitivity-ingress firewall rule will create Security Command Center (SCC) findings because it allows ingress for all ports and protocols in the Shared Address Space CIDR block set in this rule. Because of this, you should tighten the network access controls between spokes through the firewall functionality of the corresponding NVAs, using values that are valid for your environment.

To see the version that uses the Dual Shared VPC architecture mode, check step 3-networks-dual-svpc.

Using Dedicated Interconnect

If you provisioned the prerequisites listed in the Dedicated Interconnect README, follow these steps to enable Dedicated Interconnect to access on-premises resources.

  1. Rename interconnect.tf.example to interconnect.tf in the shared envs folder in 3-networks-hub-and-spoke/envs/shared (see the rename sketch after this list).
  2. Rename interconnect.auto.tfvars.example to interconnect.auto.tfvars in the shared envs folder in 3-networks-hub-and-spoke/envs/shared.
  3. Update the file interconnect.tf with values that are valid for your environment for the interconnects, locations, candidate subnetworks, vlan_tag8021q and peer info.
  4. The candidate subnetworks and vlan_tag8021q variables can be set to null to allow the interconnect module to auto generate these values.
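
A minimal sketch of the renames in steps 1 and 2, run from the 3-networks-hub-and-spoke directory (or your copy of it in the gcp-networks repo):

    cd envs/shared
    mv interconnect.tf.example interconnect.tf
    mv interconnect.auto.tfvars.example interconnect.auto.tfvars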

Using Partner Interconnect

If you provisioned the prerequisites listed in the Partner Interconnect README, follow these steps to enable Partner Interconnect to access on-premises resources.

  1. Rename partner_interconnect.tf.example to partner_interconnect.tf in the shared envs folder in 3-networks-hub-and-spoke/envs/shared (see the rename sketch after this list).
  2. Rename partner_interconnect.auto.tfvars.example to partner_interconnect.auto.tfvars in the shared envs folder in 3-networks-hub-and-spoke/envs/shared.
  3. Update the file partner_interconnect.tf with values that are valid for your environment for the VLAN attachments, locations, and candidate subnetworks.
  4. The candidate subnetworks variable can be set to null to allow the interconnect module to auto generate this value.
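
A minimal sketch of the renames in steps 1 and 2, run from the 3-networks-hub-and-spoke directory (or your copy of it in the gcp-networks repo):

    cd envs/shared
    mv partner_interconnect.tf.example partner_interconnect.tf
    mv partner_interconnect.auto.tfvars.example partner_interconnect.auto.tfvars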

OPTIONAL - Using High Availability VPN

If you are not able to use Dedicated or Partner Interconnect, you can also use an HA Cloud VPN to access on-premises resources.

  1. Rename vpn.tf.example to vpn.tf in the base_env folder in 3-networks-hub-and-spoke/modules/base_env (see the rename sketch after this list).

  2. Create a secret for the VPN private pre-shared key and grant the required roles to the Networks Terraform service account.

    echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_PRIVATE_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
    
    gcloud secrets add-iam-policy-binding <VPN_PRIVATE_PSK_SECRET_NAME> --member='serviceAccount:<NETWORKS_TERRAFORM_SERVICE_ACCOUNT>' --role='roles/secretmanager.viewer' --project <ENV_SECRETS_PROJECT>
    gcloud secrets add-iam-policy-binding <VPN_PRIVATE_PSK_SECRET_NAME> --member='serviceAccount:<NETWORKS_TERRAFORM_SERVICE_ACCOUNT>' --role='roles/secretmanager.secretAccessor' --project <ENV_SECRETS_PROJECT>
  3. Create a secret for the VPN restricted pre-shared key and grant the required roles to the Networks Terraform service account.

    echo '<YOUR-PRESHARED-KEY-SECRET>' | gcloud secrets create <VPN_RESTRICTED_PSK_SECRET_NAME> --project <ENV_SECRETS_PROJECT> --replication-policy=automatic --data-file=-
    
    gcloud secrets add-iam-policy-binding <VPN_RESTRICTED_PSK_SECRET_NAME> --member='serviceAccount:<NETWORKS_TERRAFORM_SERVICE_ACCOUNT>' --role='roles/secretmanager.viewer' --project <ENV_SECRETS_PROJECT>
    gcloud secrets add-iam-policy-binding <VPN_RESTRICTED_PSK_SECRET_NAME> --member='serviceAccount:<NETWORKS_TERRAFORM_SERVICE_ACCOUNT>' --role='roles/secretmanager.secretAccessor' --project <ENV_SECRETS_PROJECT>
  4. In the file vpn.tf, update the values for environment, vpn_psk_secret_name, on_prem_router_ip_address1, on_prem_router_ip_address2 and bgp_peer_asn.

  5. Verify other default values are valid for your environment.
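
A minimal sketch of the rename in step 1, run from the 3-networks-hub-and-spoke directory (or your copy of it in the gcp-networks repo):

    cd modules/base_env
    mv vpn.tf.example vpn.tf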

Deploying with Cloud Build

  1. Clone the gcp-networks repo based on the Terraform output from the 0-bootstrap step. Clone it at the same level as the terraform-example-foundation folder; the following instructions assume this layout. Run terraform output cloudbuild_project_id in the 0-bootstrap folder to get the Cloud Build project ID.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="terraform-example-foundation/0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    gcloud source repos clone gcp-networks --project=${CLOUD_BUILD_PROJECT_ID}
  2. Change to the freshly cloned repo, switch to a non-main branch, and copy the contents of the foundation step to the new repo.

    cd gcp-networks/
    git checkout -b plan
    
    cp -RT ../terraform-example-foundation/3-networks-hub-and-spoke/ .
    cp ../terraform-example-foundation/build/cloudbuild-tf-* .
    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  3. Rename common.auto.example.tfvars to common.auto.tfvars, rename shared.auto.example.tfvars to shared.auto.tfvars and rename access_context.auto.example.tfvars to access_context.auto.tfvars.

    mv common.auto.example.tfvars common.auto.tfvars
    mv shared.auto.example.tfvars shared.auto.tfvars
    mv access_context.auto.example.tfvars access_context.auto.tfvars
  4. Update the common.auto.tfvars file with values from your environment and bootstrap step. See the README.md file in any of the envs folders for additional information on the values in the common.auto.tfvars file. Update the shared.auto.tfvars file with the target_name_server_addresses. Update the access_context.auto.tfvars file with the access_context_manager_policy_id. Use terraform output to get the backend bucket value from the 0-bootstrap output.

    export ORGANIZATION_ID=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -json common_config | jq '.org_id' --raw-output)
    export ACCESS_CONTEXT_MANAGER_ID=$(gcloud access-context-manager policies list --organization ${ORGANIZATION_ID} --format="value(name)")
    echo "access_context_manager_policy_id = ${ACCESS_CONTEXT_MANAGER_ID}"
    
    sed -i'' -e "s/ACCESS_CONTEXT_MANAGER_ID/${ACCESS_CONTEXT_MANAGER_ID}/" ./access_context.auto.tfvars
    
    export backend_bucket=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "remote_state_bucket = ${backend_bucket}"
    
    sed -i'' -e "s/REMOTE_STATE_BUCKET/${backend_bucket}/" ./common.auto.tfvars

    Note: Make sure that you update the perimeter_additional_members variable with your e-mail address so that you can view and access resources in the projects protected by VPC Service Controls.

  5. Commit changes

    git add .
    git commit -m 'Initialize networks repo'
  6. You must manually plan and apply the shared environment (only once) since the development, nonproduction and production environments depend on it.

  7. To use the validate option of the tf-wrapper.sh script, please follow the instructions to install the terraform-tools component.

  8. Use terraform output to get the Cloud Build project ID and the networks step Terraform Service Account from 0-bootstrap output. An environment variable GOOGLE_IMPERSONATE_SERVICE_ACCOUNT will be set using the Terraform Service Account to enable impersonation.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../terraform-example-foundation/0-bootstrap/" output -raw networks_step_terraform_service_account_email)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
  9. Run init and plan and review output for environment shared.

    ./tf-wrapper.sh init shared
    ./tf-wrapper.sh plan shared
  10. Run validate and check for violations.

    ./tf-wrapper.sh validate shared $(pwd)/../gcp-policies ${CLOUD_BUILD_PROJECT_ID}
  11. Run apply shared.

    ./tf-wrapper.sh apply shared
  12. Push your plan branch to trigger a plan for all environments. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply. Review the plan output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git push --set-upstream origin plan
  13. Merge changes to production. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b production
    git push origin production
  14. After production has been applied, apply development.

  15. Merge changes to development. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b development
    git push origin development
  16. After development has been applied, apply nonproduction.

  17. Merge changes to nonproduction. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b nonproduction
    git push origin nonproduction
  18. Before executing the next steps, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

    unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT
  19. You can now move to the instructions in the 4-projects step.

Deploying with Jenkins

See 0-bootstrap README-Jenkins.md.

Deploying with GitHub Actions

See 0-bootstrap README-GitHub.md.

Run Terraform locally

  1. The next instructions assume that you are at the same level as the terraform-example-foundation folder. Change into the 3-networks-hub-and-spoke folder, copy the Terraform wrapper script, and ensure it can be executed.

    cd terraform-example-foundation/3-networks-hub-and-spoke
    cp ../build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
  2. Rename common.auto.example.tfvars to common.auto.tfvars, rename shared.auto.example.tfvars to shared.auto.tfvars and rename access_context.auto.example.tfvars to access_context.auto.tfvars.

    mv common.auto.example.tfvars common.auto.tfvars
    mv shared.auto.example.tfvars shared.auto.tfvars
    mv access_context.auto.example.tfvars access_context.auto.tfvars
  3. Update common.auto.tfvars file with values from your environment and bootstrap. See any of the envs folder README.md files for additional information on the values in the common.auto.tfvars file.

  4. Update shared.auto.tfvars file with the target_name_server_addresses.

  5. Update access_context.auto.tfvars file with the access_context_manager_policy_id.

  6. Use terraform output to get the backend bucket value from 0-bootstrap output.

    export ORGANIZATION_ID=$(terraform -chdir="../0-bootstrap/" output -json common_config | jq '.org_id' --raw-output)
    export ACCESS_CONTEXT_MANAGER_ID=$(gcloud access-context-manager policies list --organization ${ORGANIZATION_ID} --format="value(name)")
    echo "access_context_manager_policy_id = ${ACCESS_CONTEXT_MANAGER_ID}"
    
    sed -i'' -e "s/ACCESS_CONTEXT_MANAGER_ID/${ACCESS_CONTEXT_MANAGER_ID}/" ./access_context.auto.tfvars
    
    export backend_bucket=$(terraform -chdir="../0-bootstrap/" output -raw gcs_bucket_tfstate)
    echo "remote_state_bucket = ${backend_bucket}"
    
    sed -i'' -e "s/REMOTE_STATE_BUCKET/${backend_bucket}/" ./common.auto.tfvars

We will now deploy each of our environments (development, production, and nonproduction) using this script. When you use Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 3-networks-hub-and-spoke step, and only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, please follow the instructions to install the terraform-tools component.
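
If you installed the gcloud CLI through its component manager, the component can typically be added as shown below; if you installed gcloud another way (for example, through a package manager), follow the linked instructions instead.

    gcloud components install terraform-tools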

  1. Use terraform output to get the Cloud Build project ID and the networks step Terraform Service Account from 0-bootstrap output. An environment variable GOOGLE_IMPERSONATE_SERVICE_ACCOUNT will be set using the Terraform Service Account to enable impersonation.

    export CLOUD_BUILD_PROJECT_ID=$(terraform -chdir="../0-bootstrap/" output -raw cloudbuild_project_id)
    echo ${CLOUD_BUILD_PROJECT_ID}
    
    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=$(terraform -chdir="../0-bootstrap/" output -raw networks_step_terraform_service_account_email)
    echo ${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}
  2. Run init and plan and review output for environment shared.

    ./tf-wrapper.sh init shared
    ./tf-wrapper.sh plan shared
  3. Run validate and check for violations.

    ./tf-wrapper.sh validate shared $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  4. Run apply shared.

    ./tf-wrapper.sh apply shared
  5. Run init and plan and review output for environment production.

    ./tf-wrapper.sh init production
    ./tf-wrapper.sh plan production
  6. Run validate and check for violations.

    ./tf-wrapper.sh validate production $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  7. Run apply production.

    ./tf-wrapper.sh apply production
  8. Run init and plan and review output for environment nonproduction.

    ./tf-wrapper.sh init nonproduction
    ./tf-wrapper.sh plan nonproduction
  9. Run validate and check for violations.

    ./tf-wrapper.sh validate nonproduction $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  10. Run apply nonproduction.

    ./tf-wrapper.sh apply nonproduction
  11. Run init and plan and review output for environment development.

    ./tf-wrapper.sh init development
    ./tf-wrapper.sh plan development
  12. Run validate and check for violations.

    ./tf-wrapper.sh validate development $(pwd)/../policy-library ${CLOUD_BUILD_PROJECT_ID}
  13. Run apply development.

    ./tf-wrapper.sh apply development

If you received any errors or made any changes to the Terraform config or any .tfvars files, you must re-run ./tf-wrapper.sh plan <env> before running ./tf-wrapper.sh apply <env>.

Before executing the next stages, unset the GOOGLE_IMPERSONATE_SERVICE_ACCOUNT environment variable.

unset GOOGLE_IMPERSONATE_SERVICE_ACCOUNT