Could we generate random results at plan time rather than apply time? #121

Open
apparentlymart opened this issue Oct 1, 2020 · 7 comments
Labels
enhancement tf-devex-triage Terraform DevEx project tracking

Comments

@apparentlymart
Member

When I originally proposed the random provider we didn't have any well-defined mechanism for a provider to generate attribute values at any time other than apply time, but in the meantime we have introduced facilities for providers to be able to populate attribute values at plan time, as long as what they are populating exactly matches what will eventually result from apply.

For resource types like the ones in this provider, where the primary way to use them is to derive other resource configurations from them, it's helpful to have as much as possible populated with known values during planning, so that more values in the rest of the configuration can also be known and visible in the plan output rather than shown as (known after apply).

Given that the random provider is originating all of its results in-process rather than via remote APIs, it seems plausible that it should be able to do the random number generation during the plan phase rather than during the apply phase, and then have the apply phase just commit to the state whatever random result the plan phase chose.

In principle we have the necessary building blocks in place in the current SDK to implement this, but I know that folks working on other providers have indicated that the CustomizeDiff API is hard to use robustly, so it might end up being better to wait until this provider is using a next-generation SDK with a more robust plan customization hook. I'm recording this here now mainly to register the potential use-case as something we might consider when designing that next-generation SDK, although it could also be interesting to prototype this against the legacy SDK and see whether it's robust enough to ship.
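
For illustration, here is a minimal sketch of the shape this could take against the legacy SDK, using the CustomizeDiff hook. The resource and attribute names are hypothetical, not this provider's actual code, and note that (as later comments in this thread discover) planning runs twice, so a naive version like this would regenerate a different value on each run:

```go
package provider

import (
	"context"
	"crypto/rand"
	"encoding/hex"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceRandomHex is a hypothetical resource that generates its result
// during planning rather than during apply.
func resourceRandomHex() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"result": {Type: schema.TypeString, Computed: true},
		},
		// CustomizeDiff runs at plan time; SetNew makes the value known
		// in the plan instead of showing as (known after apply).
		CustomizeDiff: func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
			if d.Id() != "" {
				return nil // existing instance: keep the stored result
			}
			b := make([]byte, 16)
			if _, err := rand.Read(b); err != nil {
				return err
			}
			return d.SetNew("result", hex.EncodeToString(b))
		},
		// Apply must commit exactly the value the plan chose.
		CreateContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
			d.SetId(d.Get("result").(string))
			return nil
		},
		ReadContext: schema.NoopContext,
		DeleteContext: func(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
			return nil // nothing external to clean up
		},
	}
}
```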

@dmikalova

This would resolve an issue with using for_each and random_password with the kustomize provider.

@vuskeedoo

This issue started happening to me with AWS EKS.

Terraform v1.3.7
on darwin_amd64

  • provider registry.terraform.io/hashicorp/aws v4.53.0
  • provider registry.terraform.io/hashicorp/cloudinit v2.2.0
  • provider registry.terraform.io/hashicorp/helm v2.5.1
  • provider registry.terraform.io/hashicorp/kubernetes v2.16.1
  • provider registry.terraform.io/hashicorp/local v2.3.0
  • provider registry.terraform.io/hashicorp/random v3.4.3
  • provider registry.terraform.io/hashicorp/tls v4.0.4
  • provider registry.terraform.io/spotinst/spotinst v1.97.0

│ Error: Invalid for_each argument

│ on .terraform/modules/example_eks.eks/main.tf line 96, in resource "aws_ec2_tag" "cluster_primary_security_group":
│ 96: for_each = { for k, v in merge(var.tags, var.cluster_tags) :
│ 97: k => v if local.create && k != "Name" && var.create_cluster_primary_security_group_tags && v != null
│ 98: }
│ ├────────────────
│ │ local.create is true
│ │ var.cluster_tags is empty map of string
│ │ var.create_cluster_primary_security_group_tags is true
│ │ var.tags is map of string with 3 elements

│ The "for_each" map includes keys derived from resource attributes that
│ cannot be determined until apply, and so Terraform cannot determine the
│ full set of keys that will identify the instances of this resource.

│ When working with unknown values in for_each, it's better to define the map
│ keys statically in your configuration and place apply-time results only in
│ the map values.

│ Alternatively, you could use the -target planning option to first apply
│ only the resources that the for_each value depends on, and then apply a
│ second time to fully converge.

@bflad bflad added the tf-devex-triage Terraform DevEx project tracking label Mar 6, 2023
@bflad
Member

bflad commented Mar 6, 2023

There is a growing set of highly 👍'd downstream resource issues, e.g. hashicorp/terraform-provider-aws#19583, that would benefit from this enhancement. While there may be some things those downstream resources could do to help prevent problems, if they're using terraform-plugin-sdk they are limited in their ability to handle these situations appropriately, because that SDK tends to hide or abstract value details in unfortunate ways. Larger providers especially will take a long time to migrate completely to terraform-plugin-framework, let alone have practitioners upgrade to the migrated resources, so we should probably handle this here, as it may be a quicker path to resolution for some practitioners.

One holdup from before was that this provider was also using terraform-plugin-sdk, which was awkward to work with in these situations. It has now been migrated to terraform-plugin-framework. I'm queuing this up for maintainer discussion today, although I cannot guarantee any particular implementation timeline.

@koslowdavida

Seconding calculating random values at plan time rather than apply time.

@brittandeyoung

brittandeyoung commented Nov 19, 2023

@apparentlymart and @bflad I have started a WIP pull request to implement generating the id for the random_pet resource so that the value will be known at plan time. I was able to successfully move the id generation into a plan modifier on the id schema field. The issue I am running into is that, per the Terraform Resource Instance Change Lifecycle docs, the PlanResourceChange function actually runs twice, and Terraform expects that all schema fields that were known in the initial plan match the result in the second plan. This causes the following error when I attempt to test the resource, since a new ID is generated on each call:

➜ terraform-provider-random (f-pet-generate-on-plan) ✗ go test -v -timeout=120s -parallel=4 ./internal/provider/... -run=TestAccResourcePet_Prefix
=== RUN   TestAccResourcePet_Prefix
    resource_pet_test.go:752: Step 1/1 error: Error running apply: exit status 1
        
        Error: Provider produced inconsistent final plan
        
        When expanding the plan for random_pet.pet_1 to include new values learned so
        far during apply, provider "registry.terraform.io/hashicorp/random" produced
        an invalid new value for .id: was cty.StringVal("consul-immune-loon"), but
        now cty.StringVal("consul-touched-drum").
        
        This is a bug in the provider, which should be reported in the provider's own
        issue tracker.
--- FAIL: TestAccResourcePet_Prefix (0.34s)
FAIL
FAIL    github.com/terraform-providers/terraform-provider-random/internal/provider      0.896s
FAIL

I have attempted two things to potentially save the initially generated id for the second run to use:

  1. Save the generated id to context, then read the id from context on the second run. (This is currently what you will see in my PR.)
  2. (I know this was a long shot.) Save the generated id to PrivateState, then read the id from PrivateState on the second run.

Neither of these methods seems to keep the value around for the second run. From the return values of the plan modifiers, I do not see anywhere else to save the generated id for later retrieval, as it does not look like state can be updated during plan modification. Any pointers on something I am missing, or did I hit a brick wall?
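
For reference, attempt 2 looks roughly like this in terraform-plugin-framework terms. The names here are illustrative rather than the PR's actual code, and, per the above, the stashed value does not come back on the second plan run:

```go
package planmodifiers

import (
	"context"
	"encoding/json"

	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
	"github.com/hashicorp/terraform-plugin-framework/types"
)

const privateKey = "generated_id" // illustrative private-state key

// generateIDAtPlan generates the id during planning and tries to stash it
// in private state so a later plan run could reuse it.
type generateIDAtPlan struct {
	generate func() string // e.g. the pet-name generator
}

func (m generateIDAtPlan) Description(context.Context) string {
	return "generates the id at plan time"
}

func (m generateIDAtPlan) MarkdownDescription(ctx context.Context) string {
	return m.Description(ctx)
}

func (m generateIDAtPlan) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) {
	if !req.StateValue.IsNull() {
		return // existing instance: keep the id already in state
	}
	// Reuse an id stashed by an earlier plan run, if one were available.
	if data, diags := req.Private.GetKey(ctx, privateKey); data != nil {
		resp.Diagnostics.Append(diags...)
		var id string
		if err := json.Unmarshal(data, &id); err == nil {
			resp.PlanValue = types.StringValue(id)
			return
		}
	}
	id := m.generate()
	resp.PlanValue = types.StringValue(id)
	// SetKey requires valid JSON, hence the Marshal. In practice the value
	// stashed here is not handed back to the second PlanResourceChange
	// call, which is exactly the problem described above.
	if data, err := json.Marshal(id); err == nil {
		resp.Diagnostics.Append(resp.Private.SetKey(ctx, privateKey, data)...)
	}
}
```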

@TJNII

TJNII commented Feb 4, 2024

I'd like to see an option to generate the value at plan time. I have a troubleshooting case where I'd like to use the random output in a map key fed to for_each, so I can quickly iterate on a resource that's slow to destroy and very fussy about its overall configuration. There are workarounds for my case, but none that avoid multi-pass applies or editing code to re-roll the resources.

A generate_at_plan = true flag would be helpful where random keys in for_each configurations are desirable. I could see this being useful both for corner-case configuration that works around issues and for security configurations designed to re-roll resources that shouldn't be long-lived (certs/keys/what-have-you).
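
Concretely, such a flag might be declared like this in the provider's framework schema. This is purely hypothetical, not an implemented attribute of the random provider:

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault"
)

// Hypothetical schema shape for an opt-in generate_at_plan flag; a
// default of false would preserve today's apply-time behavior.
func randomStringSchema() schema.Schema {
	return schema.Schema{
		Attributes: map[string]schema.Attribute{
			"generate_at_plan": schema.BoolAttribute{
				Optional: true,
				Computed: true,
				Default:  booldefault.StaticBool(false),
			},
			"result": schema.StringAttribute{
				Computed: true,
				// A plan modifier keyed off generate_at_plan would populate
				// this during planning, subject to the blocker discussed
				// elsewhere in this thread.
			},
		},
	}
}
```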

@apparentlymart
Member Author

Hi all! I was away from this for a long time due to working on other things, but I encountered a new instance of this problem today, so I came back to see if anything new was here!

@brittandeyoung, I think what you've found here is indeed an important blocker for what I was considering when I wrote this issue. Terraform Core doesn't currently let providers refer back to their initial plan when building the final plan, instead expecting them to be able to reconstruct the same results from the final configuration.

I think when I originally wrote this up I was thinking about the resource types that support specifying a fixed seed, like random_shuffle and random_integer, which can therefore guarantee to generate the same result given the same input. For the resource types that derive their randomness "ambiently" (e.g. from the operating system's strong RNG) this would not be possible, because the plan is derived from something outside of the configuration.

The random_string resource type is, I believe, the one most likely to benefit from plan-time generation, but that one doesn't support a fixed seed because it uses a crypto random number generator. I thought random_pet supported using a fixed seed, but referring to the docs I see that I misremembered that.

Given all this, I don't think what I described in this issue is really actionable right now. It would require, as @brittandeyoung noted, some way to preserve at least some additional information between the initial plan and the final plan, which would require a change to the provider protocol and thus to Terraform Core.
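
To make the fixed-seed distinction concrete, here is a toy sketch (not this provider's actual algorithm) of why seeded resources could safely generate at plan time: the re-plan during apply reproduces the identical value from the same inputs, whereas a value drawn from an ambient RNG cannot be reproduced.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// pickDeterministic mimics seeded resources like random_integer: identical
// (seed, min, max) inputs always reproduce the identical result, so the
// initial plan and the re-plan during apply cannot disagree.
func pickDeterministic(seed string, min, max int64) int64 {
	h := fnv.New64a()
	h.Write([]byte(seed))
	r := rand.New(rand.NewSource(int64(h.Sum64())))
	return min + r.Int63n(max-min+1)
}

func main() {
	fmt.Println(pickDeterministic("my-seed", 1, 100)) // some value in [1, 100]
	fmt.Println(pickDeterministic("my-seed", 1, 100)) // the same value again
}
```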
