
Failed to marshal state to json: unsupported attribute "public_network_access" #215

Closed
MrSimonC opened this issue Sep 9, 2022 · 9 comments
Labels
bug (Something isn't working), upstream-terraform

Comments

@MrSimonC

MrSimonC commented Sep 9, 2022

Hi everyone.
I'm trying to import a resource group "digitalplatform-dev" (which obviously exists) into remote Terraform state held in an Azure storage account, with this command:

aztfy resource `
  --backend-type=azurerm `
  --backend-config=resource_group_name=digitalplatform-state-dev `
  --backend-config=storage_account_name=digitalplatformstatedev `
  --backend-config=container_name=development `
  --backend-config=key=commoninfrastructure.tfstate `
  --name=main-resource-group `
  /subscriptions/000mysubscriptionguid000/resourceGroups/digitalplatform-dev

But I keep getting this error:

Error: generating Terraform configuration: converting from state to configurations: converting terraform state to config for resource azurerm_resource_group.res-0: show state: exit status 1
Failed to marshal state to json: unsupported attribute "public_network_access"

Any pointers? I can't see a reference to public_network_access anywhere in the resource group docs. The resource group I'm terraforming has resources inside it, but I'd like to think that's not connected.

Update: interestingly, although the prompt shows a hard stop / error, after inspecting the remote state in the Azure storage account I can see it has updated / written the remote state correctly! So it's more a warning than an error, and this issue is less severe now that I've found the remote state is written fine.

@magodo
Collaborator

magodo commented Sep 10, 2022

@MrSimonC This is really weird. The error occurred when running terraform state show, after terraform import of the resource group. Could you please manually do the import and state show by following this guide, to see whether the issue still exists?
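The manual repro being suggested would look roughly like this (a sketch only; the resource address and ID are taken from the error above, and the commands assume an initialized workspace with the azurerm provider and backend already configured):

```shell
# Manually import the resource group, as aztfy does internally.
terraform import azurerm_resource_group.res-0 \
  /subscriptions/000mysubscriptionguid000/resourceGroups/digitalplatform-dev

# Then render it from state -- this is the step that fails inside aztfy.
terraform state show azurerm_resource_group.res-0
```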

magodo added the question (Further information is requested) label on Sep 10, 2022
@MrSimonC
Author

(I'm back at work Monday, will try it then)

@MrSimonC
Author

Apologies, work (as usual) takes over, but a few more things:

  1. I'm using PowerShell, not Git Bash, as aztfy seems not to like forward slashes at the start of the resource ID
  2. The issue occurs (today) when importing a single resource to remote Azure state
  3. Again, importing a single resource works flawlessly (directly against remote state held in Azure storage) but results in that reported error message
  4. No error is seen when importing locally (to an empty directory), only with remote Azure storage
  5. Where that remote Azure storage has other entries, and we're adding another resource into that existing state file, the error always appears.
  6. terraform state show azurerm_kubernetes_cluster_node_pool.cluster_node_pool (as an example) shows what you'd expect, e.g.
resource "azurerm_kubernetes_cluster_node_pool" "cluster_node_pool" {
    enable_auto_scaling    = true
    enable_host_encryption = false
    enable_node_public_ip  = false
    fips_enabled           = false
    id                     = "/subscriptions/0000mySubscription000/resourceGroups/digitalplatform-dev/providers/Microsoft.ContainerService/managedClusters/digitalplatform-cluster/agentPools/agentpool"
    kubelet_disk_type      = "OS"
    kubernetes_cluster_id  = "/subscriptions/0000mySubscription000/resourceGroups/digitalplatform-dev/providers/Microsoft.ContainerService/managedClusters/digitalplatform-cluster"
    max_count              = 6
    max_pods               = 110
    min_count              = 2
    mode                   = "System"
    name                   = "agentpool"
    node_count             = 2
    node_labels            = {}
    node_taints            = []
    os_disk_size_gb        = 128
    os_disk_type           = "Managed"
    os_sku                 = "Ubuntu"
    os_type                = "Linux"
    priority               = "Regular"
    scale_down_mode        = "Delete"
    spot_max_price         = -1
    tags                   = {}
    ultra_ssd_enabled      = false
    vm_size                = "Standard_DS2_v2"
    zones                  = [
        "1",
        "2",
        "3",
    ]

    timeouts {}
}
  7. I've found the issue comes from the contents of the existing remote Azure state file we're appending to using aztfy resource ...

e.g. today I got:

Error: generating Terraform configuration: converting from state to configurations: converting terraform state to config for resource azurerm_kubernetes_cluster_node_pool.main-resource-group: show state: exit status 1
Failed to marshal state to json: unsupported attribute "public_network_access"

... yet it imported OK.
In the remote state file, the only existing entry which mentions "public_network_access" is:

{
    "version": 4,
    "terraform_version": "1.2.9",
    "serial": 21,
    "lineage": "b5d34b85-5f9d-1046-4605-f20852a1a77b",
    "outputs": {},
    "resources": [
      {
        "mode": "managed",
        "type": "azurerm_app_configuration",
        "name": "appconf",
        "provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]",
        "instances": [
          {
            "schema_version": 0,
            "attributes": {
              "endpoint": "https://myExampleEndPoint.azconfig.io",
              "id": "/subscriptions/000mySubscription000/resourceGroups/digitalplatform-dev/providers/Microsoft.AppConfiguration/configurationStores/myExampleEndPoint",
              "identity": [],
              "location": "westeurope",
              "name": "myExampleEndPoint",
              "primary_read_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "primary_write_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "public_network_access": "",
              "resource_group_name": "digitalplatform-dev",
              "secondary_read_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "secondary_write_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "sku": "free",
              "tags": {
                "environment": "development",
                "terraformed": "True"
              },
              "timeouts": null
            },
            "sensitive_attributes": [],
            "private": "REDACTED=",
            "dependencies": [
              "azurerm_resource_group.main_resource_group"
            ]
          }
        ]
      },
...
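As an aside, one quick way to confirm which resources in a state file mention a given attribute is to walk the state JSON directly. A minimal sketch (generic v4 state-file walk; the sample data below is a trimmed version of the excerpt above, and the file name in the comment is hypothetical):

```python
import json

def resources_with_attribute(state: dict, attr: str) -> list[str]:
    """Return "type.name" for each resource instance whose attributes
    contain the given key (Terraform v4 state layout, as in the excerpt)."""
    hits = []
    for res in state.get("resources", []):
        for inst in res.get("instances", []):
            if attr in inst.get("attributes", {}):
                hits.append(f'{res["type"]}.{res["name"]}')
    return hits

# Trimmed-down sample mirroring the state excerpt above; for a real file:
# state = json.load(open("terraform.tfstate"))
state = json.loads("""
{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "azurerm_app_configuration",
      "name": "appconf",
      "instances": [
        {"attributes": {"public_network_access": "", "sku": "free"}}
      ]
    }
  ]
}
""")
print(resources_with_attribute(state, "public_network_access"))
# -> ['azurerm_app_configuration.appconf']
```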

@magodo
Collaborator

magodo commented Sep 15, 2022

@MrSimonC Much appreciated for the detailed information above, which is quite useful!

The error comes from terraform state show, which implies that the provider version used by aztfy can't unmarshal what is stored in the remote state for azurerm_app_configuration. The reason is that the public_network_access property was introduced in provider v3.21.0. Each aztfy release is bound to a specific provider version (which you can see in the file provider.tf in the output directory of aztfy). I believe if you use the latest aztfy (v0.7.0), which is bound to provider v3.22.0, the issue should be resolved.
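Since the bound version is recorded in provider.tf, it can be checked directly from the output directory; a sketch (file name per the comment above):

```shell
# Show the provider version pin that aztfy wrote into its output directory.
grep -n 'version' provider.tf
```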

Meanwhile, the reason the current implementation of the state-to-HCL conversion also shows the state for unrelated resources is that the terraform-exec library doesn't currently support terraform state show; this is requested in: hashicorp/terraform-exec#336

magodo added the upstream-terraform and bug (Something isn't working) labels and removed the question (Further information is requested) label on Sep 28, 2022
@volver-13

@magodo Is it possible to specify the azurerm provider version for aztfy? I have my existing plan at 3.20 and aztfy runs 3.46. This causes issues during import. It seems aztfy doesn't respect the provider setting at all; is it possible to somehow force it to use the 3.20 version?

@magodo
Collaborator

magodo commented Mar 9, 2023

@Brysk If you are appending to an existing workspace where you have the terraform block defined (you can simply define it if it doesn't exist), it will use that version (i.e. v3.20.0). But that might cause issues, as aztfy assumes the provider's schema matches the bound version.

@volver-13

volver-13 commented Mar 9, 2023

@magodo yes, I actually have a terraform block defined in my existing workspace:

terraform {
  required_providers {

    azurerm = {
      source  = "hashicorp/azurerm"
      version = "= 3.20.0"
    }
  }

  required_version = ">= 1.1.0"
}

For some reason it always uses the latest, which is 3.46, and I'm not able to find a solution here.

EDIT:
I'm using v0.10 since v0.11 is not yet available on the Homebrew tap

aztfy --version
aztfy version v0.10.0(c11238f)
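As a side note (not from the thread itself): one way to confirm which azurerm version a plain workspace actually resolves, independent of aztfy, is to inspect the dependency lock file written by terraform init; a sketch:

```shell
# After init, the negotiated provider versions are pinned in the lock file.
terraform init
grep -A 2 'registry.terraform.io/hashicorp/azurerm' .terraform.lock.hcl
```

Note this reflects the workspace's own terraform block, not the version aztfy binds internally in its temporary import directories.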

@magodo
Collaborator

magodo commented Mar 10, 2023

@Brysk You are right: aztfy generates a couple of temp directories to import in parallel, and each such directory creates a Terraform config with the bound provider version. There is an escape hatch for this (--dev-provider), but it needs special setup. I'll create a new issue (#375) to track this feature request.

@magodo
Collaborator

magodo commented May 24, 2023

Fixed by #376

magodo closed this as completed on May 24, 2023