
docker_container: Provider produced inconsistent final plan #408

Closed
sreboot opened this issue Jul 14, 2022 · 7 comments · Fixed by #421
Labels
bug Something isn't working r/container Relates to the container resource

Comments


sreboot commented Jul 14, 2022

Terraform (and docker Provider) Version

$ terraform -v
Terraform v1.2.4
on openbsd_amd64
+ provider registry.terraform.io/kreuzwerker/docker v2.18.0

Affected Resource(s)

  • docker_container
  • docker_image

Terraform Configuration Files

locals {
  shortid = substr(uuid(), 0, 8)
  mod_labels = ["com.docker.swarm.affinities", "triton.cns.services"]
}

resource "docker_container" "instance" {
  name       = "${var.hostname}${format("%02d", count.index + 1)}-${substr(uuidv5("dns", "${var.hostname}${format("%02d", count.index + 1)}${local.shortid}"), 0, 8)}"
  hostname   = "${var.hostname}${format("%02d", count.index + 1)}-${substr(uuidv5("dns", "${var.hostname}${format("%02d", count.index + 1)}${local.shortid}"), 0, 8)}"
  image      = docker_image.myimage.latest
  count      = var.instances
  must_run   = true
  restart    = "always"
  entrypoint = var.entrypoint
  command    = var.command
  log_driver = var.log_driver
  log_opts   = var.log_opts

  dynamic "labels" {
    for_each = [for l in var.labels : {
      label = l.label
      value = l.value
      } if !contains(local.mod_labels, l.label)
    ]

    content {
      label = labels.value.label
      value = labels.value.value
    }
  }

  labels {
    label = "com.docker.swarm.affinities"
    value = var.affinity_group != null ? "[\"affinity_group==${var.affinity_group}${format("%02d", count.index + 1)}\"]" : var.labels.com_docker_swarm_affinities.value
  }

  labels {
    label = "triton.cns.services"
    value = join(",", [var.labels.triton_cns_services.value, "${var.hostname}${format("%02d", count.index + 1)}"])
  }

  env = setunion(["COUNT=${format("%02d", count.index + 1)}", "FHOSTNAME=${var.hostname}${format("%02d", count.index + 1)}"], var.env)

  dynamic "ports" {
    for_each = var.ports

    content {
      internal = ports.key
      external = ports.key
    }
  }

  dynamic "upload" {
    for_each = var.upload_files == null ? {} : var.upload_files

    content {
      content    = upload.value.local_file
      file       = upload.value.remote_file
      executable = upload.value.executable
    }
  }

  lifecycle {
    ignore_changes = [name, hostname]
  }
}

data "docker_registry_image" "myimage" {
  name = var.image
}

resource "docker_image" "myimage" {
  name          = data.docker_registry_image.myimage.name
  keep_locally  = true
  pull_triggers = ["data.docker_registry_image.myimage.sha256_digest"]
}

output "instance_name" {
  value = docker_container.instance.*.name
}

output "ip_address" {
  value = docker_container.instance.*.ip_address
}

output "ports" {
  value = var.ports
}

output "image" {
  value = docker_image.myimage.repo_digest
}

Debug Output

module.consul-exporter.docker_container.instance[0]: Still destroying... [id=95068c1e87714aa38ae0955c2af963def6186004a7d444cd9a644767910e0fe6, 20s elapsed]
module.consul-exporter.docker_container.instance[0]: Still destroying... [id=95068c1e87714aa38ae0955c2af963def6186004a7d444cd9a644767910e0fe6, 30s elapsed]
module.consul-exporter.docker_container.instance[0]: Destruction complete after 36s
module.consul-exporter.docker_image.myimage: Modifying... [id=sha256:83c33a3c475b756146a1959440646e9d0ac1d5244227e712083bb38bdced7f44prom/consul-exporter:v0.7.0]
module.consul-exporter.docker_image.myimage: Modifications complete after 9s [id=sha256:b021157941ca521149329d8ae68ce82a0a303d72825ba0d3a128c068dffa14ccprom/consul-exporter:v0.8.0]
╷
│Error: Provider produced inconsistent final plan
│
│When expanding the plan for
│module.consul-exporter.docker_container.instance[0] to include new values
│learned so far during apply, provider
│"registry.terraform.io/kreuzwerker/docker" produced an invalid new value
│for .image: was
│cty.StringVal("sha256:83c33a3c475b756146a1959440646e9d0ac1d5244227e712083bb38bdced7f44"),
│but now
│cty.StringVal("sha256:b021157941ca521149329d8ae68ce82a0a303d72825ba0d3a128c068dffa14cc").
│
│This is a bug in the provider, which should be reported in the provider's
│own issue tracker.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.

Panic Output

Expected Behaviour

Redeploy the docker instance with a new docker image.

Actual Behaviour

Results in failed run as described above.

Steps to Reproduce

This only affects deploys where a brand-new image is pulled in via pull_triggers.

  1. terraform apply

Important Factoids

References

@Junkern Junkern added bug Something isn't working r/container Relates to the container resource labels Jul 14, 2022

sreboot commented Jul 26, 2022

Could be related to this change #212 (comment)


Junkern commented Jul 28, 2022

I found the underlying issue. It originates in the name attribute of docker_image.

The name attribute is internally defined with ForceNew: false. So whenever the name attribute is updated, Terraform does not yet see the new value: it issues a "Read" with the old value (e.g. ubuntu:precise) and populates the plan with the old values => docker_container uses the old docker_image.myimage.latest (in your case sha256:83c33a3c475b756146a1959440646e9d0ac1d5244227e712083bb38bdced7f44).

During terraform apply, the new name value (e.g. ubuntu:jammy) is then used, and we get new values for the digest, the image ID, and so on. docker_container suddenly sees a new value for docker_image.myimage.latest.
The values from the plan and the values from the apply differ => obviously Terraform complains.

Possible solutions:

  • Make the name attribute of docker_image ForceNew: true. Whenever its value changes, destroy and recreate the docker_image; this correctly updates all the other computed values. I am not 100% sure whether this counts as a breaking change...
  • I can't think of anything else, because Terraform's lifecycle is quite strict.
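
The proposed change can be sketched roughly as follows. This is an illustrative sketch only, assuming the terraform-plugin-sdk v2 helper/schema package; the surrounding resource definition is a stand-in, not the provider's actual source:

```go
package provider

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// resourceDockerImage is a hypothetical sketch, not the provider's real code.
// The relevant change is ForceNew: true on "name": when the configured image
// name changes, Terraform plans a destroy/create of the docker_image instead
// of an in-place update, so dependent docker_container resources already see
// the replacement (and its new image ID) at plan time.
func resourceDockerImage() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true, // was effectively ForceNew: false; a name change now forces replacement
			},
			// ... remaining attributes unchanged ...
		},
	}
}
```

With ForceNew set, the plan output marks the attribute with "# forces replacement" (as in the log further down), which is how the downstream docker_container stays consistent between plan and apply.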


sreboot commented Jul 28, 2022

Nice one @Junkern - I believe there should be no harm in flagging the name attribute with ForceNew: true, provided it doesn't break the keep_locally = true functionality.


sreboot commented Jul 28, 2022

I've managed to test this quickly with your suggested change. It works as expected in our environment (it fixed the issue) and also works in conjunction with keep_locally = true: both the new and the old image are present after the upgrade, as expected.


sreboot commented Jul 28, 2022

Attached log output with the change:

  # module.consul-exporter.docker_image.myimage must be replaced
-/+ resource "docker_image" "myimage" {
      ~ id            = "sha256:4f069174c935c49e73bff5200d2b57e6ced520d4f3cc9af7ae6e5f7903f7e2f4prom/consul-exporter:v0.7.1" -> (known after apply)
      ~ latest        = "sha256:4f069174c935c49e73bff5200d2b57e6ced520d4f3cc9af7ae6e5f7903f7e2f4" -> (known after apply)
      ~ name          = "prom/consul-exporter:v0.7.1" -> "prom/consul-exporter:v0.7.0" # forces replacement
      + output        = (known after apply)
      ~ repo_digest   = "prom/consul-exporter@sha256:4f069174c935c49e73bff5200d2b57e6ced520d4f3cc9af7ae6e5f7903f7e2f4" -> (known after apply)
        # (2 unchanged attributes hidden)
    }

Plan: 2 to add, 0 to change, 2 to destroy.

...

module.consul-exporter.docker_container.instance[0]: Still destroying... [id=2887d557527549fc997ecf208474bd43956960547e864c568904566263bbc1bf, 20s elapsed]
module.consul-exporter.docker_container.instance[0]: Still destroying... [id=2887d557527549fc997ecf208474bd43956960547e864c568904566263bbc1bf, 30s elapsed]
module.consul-exporter.docker_container.instance[0]: Destruction complete after 34s
module.consul-exporter.docker_image.myimage: Destroying... [id=sha256:4f069174c935c49e73bff5200d2b57e6ced520d4f3cc9af7ae6e5f7903f7e2f4prom/consul-exporter:v0.7.1]
module.consul-exporter.docker_image.myimage: Destruction complete after 0s
module.consul-exporter.docker_image.myimage: Creating...
module.consul-exporter.docker_image.myimage: Creation complete after 3s [id=sha256:83c33a3c475b756146a1959440646e9d0ac1d5244227e712083bb38bdced7f44prom/consul-exporter:v0.7.0]
module.consul-exporter.docker_container.instance[0]: Creating...
module.consul-exporter.docker_container.instance[0]: Still creating... [10s elapsed]
module.consul-exporter.docker_container.instance[0]: Still creating... [20s elapsed]
module.consul-exporter.docker_container.instance[0]: Still creating... [30s elapsed]
module.consul-exporter.docker_container.instance[0]: Still creating... [40s elapsed]
module.consul-exporter.docker_container.instance[0]: Still creating... [50s elapsed]
module.consul-exporter.docker_container.instance[0]: Creation complete after 57s [id=15945928c47942cf9063272647e26363afb74b64b6cc4f2fa200e43f2f5cda03]
Releasing state lock. This may take a few moments...

Apply complete! Resources: 2 added, 0 changed, 2 destroyed.


sreboot commented Aug 9, 2022

Quick question - when is the next release expected?


Junkern commented Aug 9, 2022

I'll try to squeeze it in before Thursday, because after that I won't have time for more than two weeks :)
