Replies: 2 comments
The easiest way is to use a tool like VolSync: create a new volume with a StorageClass that references the new Ceph cluster, and let VolSync copy the contents for you. Copying the data of the volumes themselves is not sufficient; Ceph-CSI also stores some metadata directly in RADOS (the 'journal'). Without the contents of the journal, Ceph-CSI will no longer be able to manage the volumes.
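For illustration, a PVC-to-PVC copy with VolSync might look roughly like the sketch below, using the `volsync.backube/v1alpha1` API with the rsync-TLS mover. The PVC name, StorageClass, capacity, and access modes are assumptions to adapt to your setup; check the field names against the VolSync version you have installed.

```yaml
# Destination: provisions a new PVC on the StorageClass that points at the
# new Ceph cluster (names and sizes are illustrative).
apiVersion: volsync.backube/v1alpha1
kind: ReplicationDestination
metadata:
  name: mydata-migration-dst
spec:
  rsyncTLS:
    copyMethod: Direct
    capacity: 10Gi
    accessModes: [ReadWriteMany]
    storageClassName: cephfs-new-cluster
---
# Source: copies the existing PVC's contents to the destination above.
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: mydata-migration-src
spec:
  sourcePVC: mydata
  trigger:
    manual: migrate-once
  rsyncTLS:
    copyMethod: Direct
    # Fill these in from the ReplicationDestination's status fields.
    address: <destination-address>
    keySecret: <tls-key-secret>
```

Once the copy completes, the workload is repointed at the new PVC; the old PV stays managed by the origin cluster until you delete it.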
Hi, OK. How can I migrate the metadata in RADOS (the journal) as well? And what should I do so that I can keep the existing StorageClass rather than creating a new one?
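For reference, the journal lives as RADOS omap objects that can be inspected with the `rados` CLI. A rough sketch follows; the pool name `cephfs_metadata`, the `csi` RADOS namespace, and the volume UUID are assumptions (Ceph-CSI for CephFS keeps its journal objects in the filesystem's metadata pool, normally under the `csi` namespace):

```shell
# List the Ceph-CSI journal objects (pool and namespace are illustrative).
rados -p cephfs_metadata --namespace csi ls | grep '^csi\.'

# The directory object maps PV names to volume UUIDs.
rados -p cephfs_metadata --namespace csi listomapvals csi.volumes.default

# Each volume has its own journal object keyed by its UUID.
rados -p cephfs_metadata --namespace csi listomapvals csi.volume.<volume-uuid>
```

These objects and their omap key/value pairs would have to exist with identical names and contents on the remote cluster for Ceph-CSI to keep managing the volumes under the same StorageClass; whether recreating them by hand is a supportable approach is a question for the maintainers.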
We are in the process of migrating our Kubernetes workloads from an existing Ceph cluster ("origin") to a new Ceph cluster ("remote"). Our goal is to transition the CephFS-based Persistent Volumes (PVs) with minimal changes, primarily by updating the CSI configuration in Kubernetes.
Migration Approach:
1. Utilize a Ceph client that has access to both the origin and remote clusters.
2. Create a CephFS volume in the origin cluster using the CSI driver connected to the origin monitors.
3. Terminate the associated pod to unmount the volume.
4. Employ rsync with the -avpPX options to copy the volume data from the origin to the remote cluster.
5. Update the CSI configuration in Kubernetes to point to the remote cluster's monitors.
6. Deploy the pod, expecting it to mount the volume from the remote cluster.
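Concretely, the flow we follow looks roughly like this (resource names, monitor addresses, subvolume paths, and secret files are illustrative placeholders):

```shell
# Scale the workload down so the volume is unmounted.
kubectl scale deployment my-app --replicas=0

# On a host with access to both clusters, mount the subvolume from each.
mount -t ceph 10.0.0.1:6789:/volumes/csi/subvol-x /mnt/origin \
    -o name=admin,secretfile=/etc/ceph/origin.secret
mount -t ceph 10.1.0.1:6789:/volumes/csi/subvol-x /mnt/remote \
    -o name=admin,secretfile=/etc/ceph/remote.secret

# Copy the data with permissions, partial-transfer resume, and xattrs.
rsync -avpPX --numeric-ids /mnt/origin/ /mnt/remote/

# Point the CSI driver at the remote monitors.
kubectl -n ceph-csi edit configmap ceph-csi-config

# Bring the workload back up.
kubectl scale deployment my-app --replicas=1
```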
After performing the above steps, the pod deployed with the updated CSI configuration pointing to the remote cluster fails to start. The error observed relates to extended attributes (xattrs), specifically a getxattr error.
This suggests that metadata associated with the files, such as inode attributes and extended attributes, is not being preserved during the rsync operation.
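To narrow down which attributes are lost, one can diff the xattrs on both sides after the copy. A sketch, assuming getfattr from the attr package is available and with illustrative mount points; note that the ceph.* attributes are virtual, generated by CephFS rather than stored on disk, so differences there are expected, and rsync's -X only carries regular stored xattrs:

```shell
# Dump all extended attributes recursively on each mount (paths illustrative).
getfattr -d -m - -R /mnt/origin > /tmp/origin.xattrs
getfattr -d -m - -R /mnt/remote > /tmp/remote.xattrs

# Any attribute that rsync failed to carry over shows up in the diff.
diff -u /tmp/origin.xattrs /tmp/remote.xattrs
```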
Actions Taken:
CephFS Snapshot Mirroring: We considered using CephFS snapshot mirroring to replicate the data between clusters. However, this approach requires the fsid of the origin and remote clusters to differ. In our case, due to constraints with RBD mirroring, we have set the same fsid for both clusters, making this approach infeasible.
RADOS Object Export/Import: We attempted to export and import the RADOS objects at the pool level for both data and metadata pools. While the objects appear to be present in the remote cluster, the CephFS filesystem on the remote cluster does not recognize them, indicating a mismatch or missing metadata.
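For completeness, the export/import we attempted was along these lines (pool names and file paths are illustrative). As we observed, object-level copies are not sufficient on their own: the remote MDS has its own journal, session table, and an FSMap that references specific pool IDs, so the filesystem does not pick up imported objects without additional recovery steps.

```shell
# On a client of the origin cluster: dump both pools to files.
rados -p cephfs_data export /tmp/cephfs_data.dump
rados -p cephfs_metadata export /tmp/cephfs_metadata.dump

# After transferring the dumps, on a client of the remote cluster
# (with the filesystem offline and the MDS stopped):
rados -p cephfs_data import /tmp/cephfs_data.dump
rados -p cephfs_metadata import /tmp/cephfs_metadata.dump
```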
Request for Guidance:
Given the constraints:
- The need to maintain the same fsid across clusters due to RBD mirroring requirements.
- The desire to minimize changes in Kubernetes, ideally only updating the CSI configuration.
- The necessity to preserve file metadata and extended attributes during migration.
I also get this error:
What would be the recommended approach to migrate CephFS volumes between clusters under these conditions? Are there tools or methods that can facilitate this migration while ensuring data integrity and minimal disruption?
Any insights or suggestions would be greatly appreciated.