You scaled down the operator?
What error do you get with this command? Does it fail because the operator is not running?
We need to review this doc again; I am also confused.
If you only want to remove the host and all the OSDs on that host, I would suggest these steps:
If there is any error with the purge in step 3, you can instead follow that section of the docs to run the purge OSD job.
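One plausible ordering of those steps, sketched as shell commands. This is an assumption based on the general Rook OSD-removal flow, not the exact steps from this thread; the OSD id `0` and the deployment names are placeholders you would substitute for your cluster.

```shell
# Hedged sketch, assuming the default rook-ceph namespace and a toolbox pod.

# 1. Scale down the operator so it does not recreate the OSD deployment.
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

# 2. Mark the OSD out and remove its deployment (osd.0 is a placeholder).
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd out osd.0
kubectl -n rook-ceph delete deployment rook-ceph-osd-0

# 3. Purge the OSD via the krew plugin; if this errors, fall back to the
#    purge-osd job described in the docs section mentioned above.
kubectl rook-ceph rook purge-osd 0 --force

# 4. Scale the operator back up once the OSD is gone.
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
```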
This documentation is unclear to me.
First you scale down the rook-ceph operator, then you say this:
The steps below are first for a PVC-based cluster, so I ignore those.
The next step below is "Confirm the OSD is down", and several of those steps are impossible to execute with the rook-ceph operator not running, for example
kubectl rook-ceph rook purge-osd
So what do you mean by "steps below (1)"? And what does "2.a" mean?
I have no filters and useAllDevices: true, so I've drained and cordoned the host I want to remove, and I've marked the matching OSD out and down. Is it now safe to shut down this host and remove it along with the OSD disk? Ceph status still shows the same capacity, so it's unclear whether the OSD is actually ready to be removed.
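One way to check this from the toolbox pod, as a hedged sketch: Ceph has a built-in `safe-to-destroy` check that only succeeds once all data has migrated off the OSD. The id `osd.0` is a placeholder for the OSD on the host being removed.

```shell
# Watch the OSD's utilization drain toward zero as PGs migrate away.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd df

# Ceph's own check: reports the OSD as safe to destroy only when no data
# would be lost; until then it explains what is still blocking removal.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd safe-to-destroy osd.0
```

Note that the total capacity shown by `ceph status` only shrinks once the OSD is actually purged from the CRUSH map, so an unchanged capacity figure by itself does not mean the drain failed.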