
Non-empty plans when migrating existing optional+computed nested Set blocks to Terraform Plugin Framework #884

Open
maastha opened this issue Nov 30, 2023 · 0 comments
Labels
bug Something isn't working

Comments

maastha commented Nov 30, 2023

Module version

1.4.2

We are currently migrating existing Plugin SDK based resources to the Plugin Framework (plugin protocol v6). The resource below has some optional+computed set block attributes whose nested attributes, even when not configured (or only partially configured) by the user, are still returned by the API/provider. That response is currently persisted in state.

To migrate these blocks to the Plugin Framework we tried schema.SetNestedBlock, but after upgrading to the Framework-migrated resource, terraform plan always produces a non-empty plan. This would be a breaking change for our users.

What is the recommended non-breaking way to migrate collection attributes such as these?

Relevant provider source code

Terraform Plugin SDK based schema for "replication_specs" block:

"replication_specs": {
	Type:     schema.TypeSet,
	Optional: true,
	Computed: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"id": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},
			"num_shards": {
				Type:     schema.TypeInt,
				Required: true,
			},
			"regions_config": {
				Type:     schema.TypeSet,
				Optional: true,
				Computed: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"region_name": {
							Type:     schema.TypeString,
							Required: true,
						},
						"electable_nodes": {
							Type:     schema.TypeInt,
							Optional: true,
							Computed: true,
						},
						"priority": {
							Type:     schema.TypeInt,
							Optional: true,
							Computed: true,
						},
						"read_only_nodes": {
							Type:     schema.TypeInt,
							Optional: true,
							Default:  0,
						},
						"analytics_nodes": {
							Type:     schema.TypeInt,
							Optional: true,
							Default:  0,
						},
					},
				},
			},
			"zone_name": {
				Type:     schema.TypeString,
				Optional: true,
				Default:  "ZoneName managed by Terraform",
			},
		},
	},
	Set: func(v any) int {
		var buf bytes.Buffer
		m := v.(map[string]any)
		buf.WriteString(fmt.Sprintf("%d", m["num_shards"].(int)))
		buf.WriteString(m["zone_name"].(string))
		buf.WriteString(fmt.Sprintf("%+v", m["regions_config"].(*schema.Set)))
		return advancedcluster.HashCodeString(buf.String())
	},
},
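For context on the Set function above: it delegates to advancedcluster.HashCodeString, which we assume mirrors the SDK's helper/hashcode.String (a CRC32 checksum clamped to a non-negative int — an assumption, since its source is not shown here). A minimal stdlib sketch illustrates why any change to a contributing attribute (including one becoming unknown during planning) gives the element a new set identity:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// hashCodeString is a sketch of the SDK-style set-element hasher
// (assumed equivalent to helper/hashcode.String): a CRC32 checksum
// clamped to a non-negative int so it can serve as a set index.
func hashCodeString(s string) int {
	v := int(crc32.ChecksumIEEE([]byte(s)))
	if v >= 0 {
		return v
	}
	if -v >= 0 {
		return -v
	}
	// v == math.MinInt: negation overflows; fall back to 0.
	return 0
}

func main() {
	// Identical serialized elements hash identically...
	fmt.Println(hashCodeString("1Zone 1") == hashCodeString("1Zone 1"))
	// ...while any change to a contributing attribute changes the hash,
	// so the element is treated as removed-and-added in the set.
	fmt.Println(hashCodeString("1Zone 1") == hashCodeString("2Zone 1"))
}
```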

Terraform Plugin Framework migrated schema:

func clusterRSReplicationSpecsSchemaBlock() schema.SetNestedBlock {
	return schema.SetNestedBlock{
		NestedObject: schema.NestedBlockObject{
			Attributes: map[string]schema.Attribute{
				"id": schema.StringAttribute{
					Optional: true,
					Computed: true,
				},
				"num_shards": schema.Int64Attribute{
					Required: true,
				},
				"zone_name": schema.StringAttribute{
					Optional: true,
					Computed: true,
					Default:  stringdefault.StaticString("ZoneName managed by Terraform"),
				},
			},
			Blocks: map[string]schema.Block{
				"regions_config": schema.SetNestedBlock{
					NestedObject: schema.NestedBlockObject{
						Attributes: map[string]schema.Attribute{
							"analytics_nodes": schema.Int64Attribute{
								Optional: true,
								Computed: true,
								Default:  int64default.StaticInt64(0),
							},
							"electable_nodes": schema.Int64Attribute{
								Optional: true,
								Computed: true,
							},
							"priority": schema.Int64Attribute{
								Optional: true,
								Computed: true,
							},
							"read_only_nodes": schema.Int64Attribute{
								Optional: true,
								Computed: true,
								Default:  int64default.StaticInt64(0),
							},
							"region_name": schema.StringAttribute{
								Required: true,
							},
						},
					},
				},
			},
		},
	}
}
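One mitigation we are evaluating (an assumption on our part, not a confirmed recommendation) is attaching UseStateForUnknown plan modifiers so computed values already known from state are not re-marked as unknown during the upgrade. A non-runnable schema fragment against the code above — the placement is hypothetical:

```go
// Hypothetical sketch, not a confirmed fix: copy prior state into the
// plan for computed values so they do not show as "(known after apply)".
"id": schema.StringAttribute{
	Optional: true,
	Computed: true,
	PlanModifiers: []planmodifier.String{
		stringplanmodifier.UseStateForUnknown(),
	},
},
```

The set-level equivalent (setplanmodifier.UseStateForUnknown() in the SetNestedBlock's PlanModifiers) may also be relevant here, since unknown nested values change set element identity.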

Terraform Configuration Files

Use-case 1 (no blocks configured):

main.tf:
resource "mongodbatlas_cluster" "cluster-no-blocks" {
  project_id                                      = mongodbatlas_project.project-tf.id
  provider_name                                   = "AWS"
  name                                            = "tfCluster1"
  backing_provider_name                           = "AWS"
  provider_region_name                            = "US_EAST_1"
  provider_instance_size_name                     = "M10"
  auto_scaling_compute_enabled                    = false
  auto_scaling_compute_scale_down_enabled         = false
  provider_auto_scaling_compute_min_instance_size = "M10"
  provider_auto_scaling_compute_max_instance_size = "M20"
}

terraform.tfstate (only the relevant blocks are included here, since the full resource config is large):

"replication_specs": [
  {
    "id": "....",
    "num_shards": 1,
    "regions_config": [
      {
        "analytics_nodes": 0,
        "electable_nodes": 3,
        "priority": 7,
        "read_only_nodes": 0,
        "region_name": "US_EAST_1"
      }
    ],
    "zone_name": "Zone 1"
  }
],

Use-case 2 (all blocks configured):

main.tf:
resource "mongodbatlas_cluster" "cluster-multi-region-all-blocks" {
  project_id   = mongodbatlas_project.project-tf.id
  name         = "cluster-test-multi-region"
  num_shards   = 1
  cloud_backup = true
  cluster_type = "REPLICASET"

  provider_name               = "AWS"
  provider_instance_size_name = "M10"

  advanced_configuration {        # block can be partially configured by user    
    minimum_enabled_tls_protocol = "TLS1_2"
    default_read_concern         = "available"
  }

  bi_connector_config {
    enabled         = false
    read_preference = "secondary"
  }

  replication_specs {
    num_shards = 1
    regions_config {
      region_name     = "US_EAST_1"
      electable_nodes = 3
      priority        = 7
      read_only_nodes = 0
    }
    regions_config {
      region_name     = "US_EAST_2"
      electable_nodes = 2
      priority        = 6
      read_only_nodes = 0
    }
    regions_config {
      region_name     = "US_WEST_1"
      electable_nodes = 2
      priority        = 5
      read_only_nodes = 2
    }
  }
}

terraform.tfstate (only the relevant blocks are included here, since the full resource config is large):

"replication_specs": [
  {
    "id": "....",
    "num_shards": 1,
    "regions_config": [
      {
        "analytics_nodes": 0,
        "electable_nodes": 2,
        "priority": 5,
        "read_only_nodes": 2,
        "region_name": "US_WEST_1"
      },
      {
        "analytics_nodes": 0,
        "electable_nodes": 2,
        "priority": 6,
        "read_only_nodes": 0,
        "region_name": "US_EAST_2"
      },
      {
        "analytics_nodes": 0,
        "electable_nodes": 3,
        "priority": 7,
        "read_only_nodes": 0,
        "region_name": "US_EAST_1"
      }
    ],
    "zone_name": "ZoneName managed by Terraform"
  }
],

Debug Output

Expected Behavior

After upgrading to the new provider version containing the framework-migrated resource, the user should not see any planned changes for optional+computed list/set blocks when running terraform plan, and should not receive errors when running terraform apply.

Actual Behavior

On running terraform plan, the plan below was produced by Terraform:
Use-case 1 (no blocks configured):

 ~ update in-place

Terraform will perform the following actions:

  # mongodbatlas_cluster.cluster-no-blocks will be updated in-place
  ~ resource "mongodbatlas_cluster" "cluster-no-blocks" {
        id                                      = "Y2x1c3Rlcl9pZA==:NjU2ODhiMmYzZDI3MWYzZjg2MGI0NjQw-Y2x1c3Rlcl9uYW1l:dGZDbHVzdGVyMQ==-cHJvamVjdF9pZA==:NjRlY2MxNTkyNzVmMjM1OWY0Y2FlODEw-cHJvdmlkZXJfbmFtZQ==:QVdT"
        name                                    = "tfCluster1"
      ~ num_shards                              = 1 -> (known after apply)
        # (34 unchanged attributes hidden)

      - replication_specs {
          - id         = "......." -> null
          - num_shards = 1 -> null
          - zone_name  = "Zone 1" -> null

          - regions_config {
              - analytics_nodes = 0 -> null
              - electable_nodes = 3 -> null
              - priority        = 7 -> null
              - read_only_nodes = 0 -> null
              - region_name     = "US_EAST_1" -> null
            }
        }
    }

Use-case 2 (all blocks configured):

 ~ update in-place

Terraform will perform the following actions:

  # mongodbatlas_cluster.cluster-multi-region-all-blocks will be updated in-place
  ~ resource "mongodbatlas_cluster" "cluster-multi-region-all-blocks" {
        id                                      = "Y2x1c3Rlcl9pZA==:NjU2ODhlZTgzZDI3MWYzZjg2MGI0ZTY3-Y2x1c3Rlcl9uYW1l:dGYtbXVsdGktcmVnaW9uLWFsbC1ibG9ja3M=-cHJvamVjdF9pZA==:NjRlY2MxNTkyNzVmMjM1OWY0Y2FlODEw-cHJvdmlkZXJfbmFtZQ==:QVdT"
        name                                    = "tf-multi-region-all-blocks"
        # (32 unchanged attributes hidden)

      - replication_specs {
          - id         = "......." -> null
          - num_shards = 1 -> null
          - zone_name  = "ZoneName managed by Terraform" -> null

          - regions_config {
              - analytics_nodes = 0 -> null
              - electable_nodes = 2 -> null
              - priority        = 5 -> null
              - read_only_nodes = 2 -> null
              - region_name     = "US_WEST_1" -> null
            }
          - regions_config {
              - analytics_nodes = 0 -> null
              - electable_nodes = 2 -> null
              - priority        = 6 -> null
              - read_only_nodes = 0 -> null
              - region_name     = "US_EAST_2" -> null
            }
          - regions_config {
              - analytics_nodes = 0 -> null
              - electable_nodes = 3 -> null
              - priority        = 7 -> null
              - read_only_nodes = 0 -> null
              - region_name     = "US_EAST_1" -> null
            }
        }
      + replication_specs {
          + num_shards = 1
          + zone_name  = "ZoneName managed by Terraform"

          + regions_config {
              + analytics_nodes = 0
              + electable_nodes = 2
              + priority        = 5
              + read_only_nodes = 2
              + region_name     = "US_WEST_1"
            }
          + regions_config {
              + analytics_nodes = 0
              + electable_nodes = 2
              + priority        = 6
              + read_only_nodes = 0
              + region_name     = "US_EAST_2"
            }
          + regions_config {
              + analytics_nodes = 0
              + electable_nodes = 3
              + priority        = 7
              + read_only_nodes = 0
              + region_name     = "US_EAST_1"
            }
        }

Steps to Reproduce

  1. run terraform init with provider v.X (Plugin-SDK-based resource implementation) for the above resource
  2. run terraform plan
  3. run terraform apply
  4. run terraform plan again -> plan returns "No changes."
  5. update the provider version to v.Y (Plugin-Framework-based resource implementation) for the above resource, then run terraform init --upgrade
  6. run terraform plan -> plan is not empty

References
