Ceph Filesystem (CephFS)

Storage pool type: cephfs

CephFS implements a POSIX-compliant filesystem using a Ceph storage cluster to store its data. As CephFS builds upon Ceph, it shares most of its properties, including redundancy, scalability, self-healing, and high availability.

Tip
{pve} can manage Ceph setups, which makes configuring a CephFS storage easier. As recent hardware has plenty of CPU power and RAM, running storage services and VMs on the same node is possible without a significant performance impact.
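For example, on a node where Ceph is managed with pveceph, a metadata server and a CephFS with a matching storage entry can be created directly. This is a minimal sketch; the filesystem name cephfs is just an example:

pveceph mds create
pveceph fs create --name cephfs --add-storage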

To use the CephFS storage plugin, you need to update the stock Debian Ceph client. Add our Ceph repository; once added, run apt update followed by apt dist-upgrade to get the newest packages.

You need to make sure that there is no other Ceph repository configured; otherwise the installation will fail, or there will be mixed package versions on the node, leading to unexpected behavior.
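As a sketch, adding the repository and upgrading could look like the following; the Ceph release name (quincy) and Debian codename (bookworm) are assumptions and must match your setup:

echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list
apt update
apt dist-upgrade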

Configuration

This backend supports the common storage properties nodes, disable, content, and the following cephfs specific properties:

monhost

List of monitor daemon addresses. Optional, only needed if Ceph is not running on the PVE cluster.

path

The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.

username

Ceph user ID. Optional, only needed if Ceph is not running on the PVE cluster; defaults to admin.

subdir

CephFS subdirectory to mount. Optional, defaults to /.

fuse

Access CephFS through FUSE, instead of the kernel client. Optional, defaults to 0.

Configuration Example for an external Ceph cluster (/etc/pve/storage.cfg)
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
Note
Don’t forget to set up the client secret key file if cephx was not turned off.
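Alternatively, instead of editing /etc/pve/storage.cfg by hand, the same storage can be added with the pvesm command-line tool; the values below mirror the example above:

pvesm add cephfs cephfs-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --username admin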

Authentication

If you use cephx authentication, which is enabled by default, you need to copy the secret from your external Ceph cluster to a Proxmox VE host.

Create the directory /etc/pve/priv/ceph with

mkdir /etc/pve/priv/ceph

Then copy the secret

scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret

The secret must be named to match your <STORAGE_ID>. Copying the secret generally requires root privileges. The file must contain only the secret key itself, as opposed to the rbd backend, which also contains a [client.userid] section.
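To illustrate the difference, here is a sketch of both file layouts; the keys shown are placeholders, not real secrets:

# CephFS: /etc/pve/priv/ceph/<STORAGE_ID>.secret holds the bare key only
AQD...placeholder-base64-key...==

# RBD: the keyring additionally wraps the key in a [client.userid] section
[client.admin]
        key = AQD...placeholder-base64-key...==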

A secret can be retrieved from the Ceph cluster (as a Ceph admin) by issuing the following command. Replace userid with the actual client ID configured to access the cluster. For further Ceph user management, see the Ceph docs [1].

ceph auth get-key client.userid > cephfs.secret

If Ceph is installed locally on the PVE cluster, i.e., set up with pveceph, this is done automatically.
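Once the secret is in place, the storage should become active; as a quick check (storage name as in the example above):

pvesm status --storage cephfs-external
df -h /mnt/pve/cephfs-external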

Storage Features

The cephfs backend is a POSIX-compliant filesystem on top of a Ceph cluster.

Table 1. Storage features for backend cephfs

Content types               Image formats  Shared  Snapshots  Clones
vztmpl iso backup snippets  none           yes     yes [1]    no

[1] While snapshots have no known bugs, they cannot be guaranteed to be stable yet, as they lack testing.
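For instance, with the content type backup enabled as in the example above, the storage can serve as a vzdump target (the VMID 100 is a placeholder):

vzdump 100 --storage cephfs-external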