feat: Support VOLUME_MOUNT_GROUP Capability #377

Open
s4ke opened this issue Feb 7, 2023 · 12 comments
Labels: enhancement (New feature or request), pinned

Comments

s4ke (Contributor) commented Feb 7, 2023

Something that would be useful for us is a feature we already PR'ed to the existing Docker volume plugin by @costela (see https://github.com/costela/docker-volume-hetzner): support for automatically chowning the volume to a given uid:gid after creation, so that we can run containers with users other than root.

I am not very familiar with the CSI spec yet, but I was wondering if this exists already as a feature in csi-driver or whether this would require some extra work.

apricote (Member) commented Feb 7, 2023

Setting the group ownership is part of the csi-spec through the capability VOLUME_MOUNT_GROUP.

We do not yet support this capability.

apricote added the enhancement label Feb 7, 2023
s4ke (Contributor, Author) commented Feb 7, 2023

Ah, interesting. If we (as in everyone interested in this project) were to implement this after we manage to get the driver working on Swarm, when in the life-cycle would be the best point to do this?

apricote (Member) commented Feb 7, 2023

What do you mean by life-cycle? This is unrelated to Docker Swarm support, and can be implemented in parallel. I do not know whether the docker swarm CSI implementation supports this capability though.

s4ke (Contributor, Author) commented Feb 7, 2023

As I said, I'm still a bit new to the CSI spec. Sorry, I rephrased the comment above, which makes the statement a bit confusing now.

Essentially, I am interested in where in the code, and therefore when in the process of creating a new volume, such a thing would have to be added.

apricote (Member) commented Feb 7, 2023

No problem ;)

The csi-spec describes this; you can search for VOLUME_MOUNT_GROUP to find the docs for it. Another source that describes this is the kubernetes-csi book: while it contains a lot of Kubernetes-specific details, it is also a good secondary source for the csi-spec: https://kubernetes-csi.github.io/docs/support-fsgroup.html#overview-1

To summarize the spec, the field is passed in the CreateVolume, NodeStageVolume (not implemented by us) and NodePublishVolume calls.

As we do the actual mount in NodePublishVolume, this would be the right place to implement the new functionality:

func (s *NodeService) NodePublishVolume(ctx context.Context, req *proto.NodePublishVolumeRequest) (*proto.NodePublishVolumeResponse, error) {
	if req.VolumeId == "" {
		return nil, status.Error(codes.InvalidArgument, "missing volume id")
	}
	if req.TargetPath == "" {
		return nil, status.Error(codes.InvalidArgument, "missing target path")
	}
	devicePath := req.GetPublishContext()["devicePath"]
	if devicePath == "" {
		return nil, status.Error(codes.InvalidArgument, "missing device path")
	}
	var opts volumes.MountOpts
	switch {
	case req.VolumeCapability.GetBlock() != nil:
		opts = volumes.MountOpts{
			BlockVolume:          true,
			EncryptionPassphrase: req.Secrets[encryptionPassphraseKey],
		}
	case req.VolumeCapability.GetMount() != nil:
		mount := req.VolumeCapability.GetMount()
		opts = volumes.MountOpts{
			FSType:               mount.FsType,
			Readonly:             req.Readonly,
			Additional:           mount.MountFlags,
			EncryptionPassphrase: req.Secrets[encryptionPassphraseKey],
		}
	default:
		return nil, status.Error(codes.InvalidArgument, "publish volume: unsupported volume capability")
	}
	if err := s.volumeMountService.Publish(req.TargetPath, devicePath, opts); err != nil {
		return nil, status.Error(codes.Internal, fmt.Sprintf("failed to publish volume: %s", err))
	}
	return &proto.NodePublishVolumeResponse{}, nil
}

which then calls LinuxMountService.Publish (line 84):

func (s *LinuxMountService) Publish(targetPath string, devicePath string, opts MountOpts) error {

Additionally, you would need to add the capability to the list of supported capabilities here:

func (s *IdentityService) GetPluginCapabilities(context.Context, *proto.GetPluginCapabilitiesRequest) (*proto.GetPluginCapabilitiesResponse, error) {

s4ke (Contributor, Author) commented Feb 7, 2023

So I guess this could work with a simple solution similar to what we did in the docker volume plugin then?

https://github.com/costela/docker-volume-hetzner/blob/master/driver.go#L102

apricote (Member) commented Feb 7, 2023

I think so, but we can probably skip the additional mount/unmount and just run chown after mounting it.
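
As a rough sketch of that idea (illustrative only, not existing driver code; it assumes the CO passes a numeric GID and would additionally need the os and strconv imports):

	// Sketch: runs after s.volumeMountService.Publish(...) succeeded.
	// The getter is nil-safe, so this is a no-op for block volumes.
	if group := req.VolumeCapability.GetMount().GetVolumeMountGroup(); group != "" {
		gid, err := strconv.Atoi(group) // assumes a numeric GID
		if err != nil {
			return nil, status.Error(codes.InvalidArgument, fmt.Sprintf("invalid volume mount group %q", group))
		}
		// -1 keeps the current owner; only the group of the mount root changes.
		if err := os.Chown(req.TargetPath, -1, gid); err != nil {
			return nil, status.Error(codes.Internal, fmt.Sprintf("failed to set volume mount group: %s", err))
		}
	}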

apricote changed the title from "uid:gid in created volume" to "feat: Support VOLUME_MOUNT_GROUP Capability" Feb 7, 2023
s4ke (Contributor, Author) commented Feb 7, 2023

Just as a side note: since we are obviously Swarm users, this is not something that I can work on until the Swarm plugin works. So if anyone wants to take over, go for it!

github-actions bot commented May 9, 2023

This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.

github-actions bot added the Stale label May 9, 2023
github-actions bot closed this as completed Jun 9, 2023
apricote removed the Stale label Jun 12, 2023
apricote reopened this Jun 12, 2023
schmidp commented Jul 29, 2023

I am currently looking at this, but I am not a Go developer, so some of my questions might be stupid:

  1. You mention adding the capability to GetPluginCapabilities. To me it looks like it should be added to NodeGetCapabilities instead:
{
	Type: &proto.NodeServiceCapability_Rpc{
		Rpc: &proto.NodeServiceCapability_RPC{
			Type: proto.NodeServiceCapability_RPC_VOLUME_MOUNT_GROUP,
		},
	},
},
  2. A plan of action could be:

    1. Get the volumeMountGroup in NodePublishVolume
    2. Add it to a new field in MountOpts
    3. Check if the gid is already in the mount flags and handle that case
    4. Check what fsGroupChangePolicy says and, if needed, set ownership recursively: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/aaf6117e1c1db319610f9e07578602dd7bb1d201/pkg/azurefile/nodeserver.go#L366
    5. In mount.go, set the correct gid (a rough sketch follows below this list)
    6. .... things I am missing
    7. it works :)
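
For steps 4 and 5, a rough sketch of what such a helper next to the mount code could look like. Everything here is hypothetical and not part of the current driver; the package name is assumed from volumes.MountOpts, and the group is assumed to be a numeric GID as Kubernetes passes it:

package volumes

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// setMountGroup recursively hands the mounted path over to the requested group
// and sets the setgid bit on directories so newly created files inherit it,
// similar to kubelet's fsGroup handling. Hypothetical helper, for illustration.
func setMountGroup(path, group string) error {
	gid, err := strconv.Atoi(group)
	if err != nil {
		return fmt.Errorf("invalid volume mount group %q: %w", group, err)
	}
	return filepath.Walk(path, func(p string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// Lchown so symlinks themselves are re-grouped instead of their targets.
		if err := os.Lchown(p, -1, gid); err != nil {
			return err
		}
		if info.IsDir() {
			// setgid on directories so new files get the group by default.
			return os.Chmod(p, info.Mode()|os.ModeSetgid)
		}
		return nil
	})
}

Whether to always walk the whole tree or to skip it when the root already matches (the fsGroupChangePolicy=OnRootMismatch behaviour in the linked Azure driver code) is the decision behind step 4.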

apricote (Member) commented:

Hey @schmidp,

You mention adding the capability to GetPluginCapabilities. To me it looks like it should be added to NodeGetCapabilities instead:

You are correct, it needs to be in the NodeGetCapabilities response.

A plan of action could be:

Sounds good. We should also add some unit/integration tests for the helper functions. For e2e tests we can rely on the Kubernetes test suite, which already has tests around this. Once we add the capability, those tests should automatically use the new functionality instead of the kubelet-native alternative.
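
As a starting point for such a unit test, a minimal sketch against the hypothetical setMountGroup helper sketched above (it only covers input validation, so it runs without root):

package volumes

import "testing"

func TestSetMountGroupRejectsNonNumericGroup(t *testing.T) {
	// A group string that cannot be parsed as a GID should fail fast.
	if err := setMountGroup(t.TempDir(), "not-a-gid"); err == nil {
		t.Fatal("expected error for non-numeric volume mount group")
	}
}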

github-actions bot commented

This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.

github-actions bot added the Stale label Nov 13, 2023
apricote added the pinned label and removed the Stale label Nov 13, 2023