
[Bug]: volume automount on attachment doesn't work as expected on server recreation #473

Open
Handaleh opened this issue Oct 17, 2021 · 9 comments

Comments

Handaleh commented Oct 17, 2021

What happened?

Automount doesn't work when an hcloud_volume_attachment is recreated to attach an existing volume to a newly built (recreated) instance (hcloud_server).

My setup is pretty simple:

data "template_file" "user_data" {
  template = file("${path.module}/scripts/user_data.yaml.tpl")
}

resource "hcloud_server" "instances" {
  count       = local.instances
  name        = "test-instance-${count.index}"
  image       = local.os_type
  server_type = local.server_type
  location    = local.location

  ssh_keys = [ var.ssh_key ]
  user_data = data.template_file.user_data.rendered
}

resource "hcloud_volume" "storages" {
  count             = local.instances
  name              = "test-vol-${count.index}"
  size              = local.disk_size
  location          = local.location
  format            = "xfs"
  delete_protection = true
}

resource "hcloud_volume_attachment" "storages_attachments" {
  count     = local.instances
  volume_id = hcloud_volume.storages[count.index].id
  server_id = hcloud_server.instances[count.index].id
  automount = true
}

The first time it's applied, all works as expected:

  • ✔️ volume is attached and mounted
  • ✔️ my script (user_data.yaml) also runs successfully (it installs and configures a few tools on the server)

Now I apply a few changes to the script (something like adding an echo "re-run!"); now:

  • ✔️ The old instance/server and the volume attachment get destroyed
  • ✔️ The new instance and the volume attachment are created
  • 🛑 The volume is attached but not mounted!
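
For reference, a trimmed-down, hypothetical stand-in for my scripts/user_data.yaml.tpl (the real template is longer, but anything along these lines shows the behavior):

#cloud-config
# Hypothetical stand-in; the actual template is not shown here.
packages:
  - htop
runcmd:
  - echo "re-run!"
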
@Handaleh Handaleh added the bug label Oct 17, 2021
@fhofherr fhofherr self-assigned this Nov 16, 2021
fhofherr (Contributor) commented

Hi @Handaleh,

I was able to reproduce the issue. According to my tests, the problem appears as soon as your user data contains a runcmd directive. The cause is not the terraform-provider but the way our backend handles automounting: internally we use a runcmd in the cloud-init vendor data to trigger the automount. This is not ideal and we are aware of it, but we cannot give a timeline for when, or whether, we will be able to change this.

As a workaround, can you please try including the following in the runcmd section of your user data?

udevadm trigger -c add -s block -p ID_VENDOR=HC --verbose -p ID_MODEL=Volume

This is the command we would execute if it were not overwritten.
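
For example, a minimal user data with the workaround in place could look like this (the packages entry is only a placeholder for whatever else your template does):

#cloud-config
packages:
  - htop
runcmd:
  # A runcmd in user data overwrites the one from our vendor data, so
  # re-issue the automount trigger manually:
  - udevadm trigger -c add -s block -p ID_VENDOR=HC --verbose -p ID_MODEL=Volume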

pschirch commented Sep 15, 2022

@Handaleh @fhofherr

We also stumbled upon this issue with the hcloud_volume resource today. I can confirm that the mentioned workaround works for us as well. It would be great if it were no longer needed.

Additionally, an argument mount_point should be implemented to configure the volume's mount point, so that it can be referenced by other resources. Currently we create a configurable symbolic link on the assumption that the current /mnt/HC_Volume_<id> naming won't change (see the sketch after the example below).

resource "hcloud_server" "node1" {
  name        = "node1"
  image       = "ubuntu-22.04"
  server_type = "cx11"
}

resource "hcloud_volume" "important-data" {
  name        = "important-data"
  size        = 50
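  # mount_point is the argument proposed here; it does not exist in the provider today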
  mount_point = "/mnt/my-volume-mount-point"
  server_id   = hcloud_server.node1.id
  automount   = true
  format      = "ext4"
}
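
For completeness, our current symlink workaround boils down to something like this in the user data (the volume id and link target below are examples from one possible setup, not anything prescribed by the provider):

#cloud-config
runcmd:
  # Point a stable path at the automount location; relies on the
  # /mnt/HC_Volume_<id> naming staying as it is today.
  - ln -sfn /mnt/HC_Volume_12345678 /mnt/my-volume-mount-point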

Radiergummi commented Jul 7, 2023

This could probably be averted by specifying a merge_type merging strategy in the user-data file:

merge_type: "list(append)+dict(recurse_list)+str()"

This will cause runcmd directives to be appended, rather than overwritten. This is a rather obscure feature of cloud-init, and curiously, I've just opened a PR to improve the documentation on it: cloudinit.readthedocs.io/en/latest/reference/merging.html
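
For illustration, the directive sits at the top level of the cloud-config (the echo entry is just a stand-in for your own commands):

#cloud-config
merge_type: "list(append)+dict(recurse_list)+str()"
runcmd:
  # With list(append), this list should be appended to the vendor-data
  # runcmd instead of replacing it, so the automount trigger still runs.
  - echo "re-run!"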

Edit: Seconding the desire for a mount_point argument. Having to figure this out in scripts isn't ideal.

@fhofherr fhofherr removed their assignment Jul 13, 2023
lremes commented Aug 31, 2023

This also happens when creating a server with user_data defined: the volume is created and attached, but the mount does not take place.


@github-actions github-actions bot added the stale label Nov 29, 2023
@apricote apricote added pinned and removed stale labels Dec 4, 2023
wirepatch commented

> mount_point = "/mnt/my-volume-mount-point"

My two cents for this one! Or anything else that keeps current scripts from depending on implementation details that are subject to change.

wirepatch commented

> This could probably be averted by specifying a merge_type merging strategy in the user-data file:
> ...

Not working for me, while the udevadm trigger ... workaround above does.

christianromeni commented

I'm guessing this is still an open issue?

wirepatch commented

> I'm guessing this is still an open issue?

Yepp!
