A Long Read: Installing Ceph Quincy and Integrating It with Kubernetes

1. Introduction

Ceph is an open-source distributed storage system designed to deliver high performance, high reliability and scalability. It is built to run on commodity hardware: multiple storage nodes are combined into a single, unified, distributed storage pool. Its excellent fault tolerance has made it widely used; in short, it is built to take a beating.

Its main features include:

  1. Distributed architecture: Ceph is built on RADOS (Reliable Autonomic Distributed Object Store). Data is spread across the nodes of the cluster, providing redundancy and reliability.
  2. Object storage: Ceph manages data as objects, each with a unique identifier and its payload. Objects can live on different nodes, which enables load balancing and high availability.
  3. Block storage: Ceph also exposes a block interface that behaves like a traditional block device, which makes it a natural backend for virtual machines and cloud platforms.
  4. File system: CephFS is Ceph's distributed file system; it lets users access data through standard file system interfaces (e.g. POSIX).
  5. Scalability: a Ceph cluster can be expanded on demand by adding nodes to grow capacity and performance.



2. Server Setup

# Set the hostname (run the matching command on each node)
hostnamectl set-hostname pve-svr-ceph-01
hostnamectl set-hostname pve-svr-ceph-02
hostnamectl set-hostname pve-svr-ceph-03

2.1 Configure host name resolution (/etc/hosts)

cat >> /etc/hosts << EOF
172.16.213.137 pve-svr-ceph-01
172.16.213.55 pve-svr-ceph-02
172.16.213.141 pve-svr-ceph-03
EOF

2.2 Install dependencies and Python 3

yum install wget curl gcc gcc-c++ openssl openssl-devel zlib zlib-devel epel-release -y
wget https://www.python.org/ftp/python/3.7.17/Python-3.7.17.tgz -P /usr/local/src && cd /usr/local/src && tar -zvxf Python-3.7.17.tgz && cd Python-3.7.17 && ./configure && make && make install
# Create the python3 symlink (run on all three nodes)
ln -s /usr/local/bin/python3 /usr/bin/python3

2.3 Install monitoring (Zabbix)

curl -sSL http://172.16.213.23/zabbix/zabbix_install.sh |bash -

3. Download cephadm

3.1 Import the release key

rpm --import 'https://download.ceph.com/keys/release.asc'
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod a+x ./cephadm

4. Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce-23.0.6-1.el7 -y
yum makecache fast && systemctl enable docker &&systemctl start docker
mkdir /etc/docker && mkdir -pv /data/docker-root
echo '{
  "oom-score-adjust": -1000,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "bip": "172.20.1.0/16",
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://o4uba187.mirror.aliyuncs.com"],
  "data-root": "/data/docker-root",
  "exec-opts": ["native.cgroupdriver=systemd"]
}' | tee /etc/docker/daemon.json
systemctl daemon-reload && systemctl restart docker

5. Bootstrap the cluster with cephadm

[root@pve-svr-ceph-01 data]# ./cephadm bootstrap --mon-ip 172.16.213.137
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 9e977242-2ed3-11ee-9e35-b02628aff3e6
Verifying IP 172.16.213.137 port 3300 ...
Verifying IP 172.16.213.137 port 6789 ...
Mon IP `172.16.213.137` is in CIDR network `172.16.213.0/24`
Mon IP `172.16.213.137` is in CIDR network `172.16.213.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.16.213.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
firewalld ready
firewalld ready
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host pve-svr-ceph-01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
firewalld ready
Ceph Dashboard is now available at:
URL: https://pve-svr-ceph-01:8443/
User: admin
Password: or2qgjs2o7
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/9e977242-2ed3-11ee-9e35-b02628aff3e6/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo ./cephadm shell --fsid 9e977242-2ed3-11ee-9e35-b02628aff3e6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo ./cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.

Once the bootstrap completes, the dashboard is available at:

URL: https://pve-svr-ceph-01:8443/

User: admin

Password: or2qgjs2o7

5.1 Add the remaining hosts (so their disks can become OSDs)


# Export the cluster's SSH public key
ceph cephadm get-pub-key > ~/ceph.pub
# Copy it to the other nodes
ssh-copy-id -f -i ~/ceph.pub root@pve-svr-ceph-02
ssh-copy-id -f -i ~/ceph.pub root@pve-svr-ceph-03
# Add the hosts to the orchestrator
ceph orch host add pve-svr-ceph-02
ceph orch host add pve-svr-ceph-03

5.2 Automatically consume all eligible disks as OSDs

ceph orch apply osd --all-available-devices
# OSDs can also be added manually, one device at a time
ceph orch daemon add osd pve-svr-ceph-01:/dev/sdb
ceph orch daemon add osd pve-svr-ceph-02:/dev/sdb
ceph orch daemon add osd pve-svr-ceph-03:/dev/sdb
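If you do not want cephadm to keep grabbing every new disk it sees after the initial run, the cephadm docs describe an unmanaged mode for the OSD spec; it should look roughly like this:

ceph orch apply osd --all-available-devices --unmanaged=true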
# Check device status
[root@pve-svr-ceph-01 ~]# ceph orch device ls
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
pve-svr-ceph-01 /dev/sda hdd HGST_HUS728T8TAL5200_VAJ1WYEL 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdb hdd HGST_HUS728T8TAL5200_VAJ15R6L 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdc hdd HGST_HUS728T8TAL5200_VAJ15NZL 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdd hdd HGST_HUS728T8TAL5200_VAJ15T7L 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sde hdd HGST_HUS728T8TAL5200_VAJ1YGLL 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdf hdd HGST_HUS728T8TAL5200_VAJ1N47L 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdg hdd HGST_HUS728T8TAL5200_VAJ1N3NL 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdh hdd HGST_HUS728T8TAL5200_VAJ15R5L 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdi hdd HGST_HUS728T8TAL5200_VAJ1ZXTL 8001G Yes 80s ago
pve-svr-ceph-01 /dev/sdj hdd HGST_HUS728T8TAL5200_VAJ15PWL 8001G Yes 80s ago
pve-svr-ceph-02 /dev/sda hdd HGST_HUS728T8TAL5200_VAJD42ZR 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdb hdd HGST_HUS728T8TAL5200_VAJDE8MR 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdc hdd HGST_HUS728T8TAL5200_VAJBWV3R 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdd hdd HGST_HUS728T8TAL5200_VAJBVJKR 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sde hdd HGST_HUS728T8TAL5200_VAJAZLYR 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdf hdd HGST_HUS728T8TAL5200_VAJB942R 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdg hdd HGST_HUS728T8TAL5200_VAJBW1UR 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdh hdd HGST_HUS728T8TAL5200_VAJBWXUR 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdi hdd HGST_HUS728T8TAL5200_VAJ1YH4L 8001G Yes 79s ago
pve-svr-ceph-02 /dev/sdj hdd HGST_HUS728T8TAL5200_VAJ1YEKL 8001G Yes 79s ago
pve-svr-ceph-03 /dev/sda hdd TOSHIBA_MG06SCA800EY_1090A0Q0F1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdb hdd TOSHIBA_MG06SCA800EY_1090A0Q1F1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdc hdd TOSHIBA_MG06SCA800EY_1090A0Q8F1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdd hdd TOSHIBA_MG06SCA800EY_1090A017F1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sde hdd TOSHIBA_MG06SCA800EY_1090A01YF1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdf hdd TOSHIBA_MG06SCA800EY_1090A0PYF1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdg hdd TOSHIBA_MG06SCA800EY_1090A0QSF1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdh hdd TOSHIBA_MG06SCA800EY_1090A003F1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdi hdd TOSHIBA_MG06SCA800EY_1090A0Q7F1GF 8001G Yes 81s ago
pve-svr-ceph-03 /dev/sdj hdd TOSHIBA_MG06SCA800EY_1090A0Q2F1GF 8001G Yes 81s ago
# Enable NFS (this can also be done from the dashboard GUI)
ceph dashboard set-ganesha-clusters-rados-pool-namespace [/]

                                                                     


On a brand-new server with fresh disks (no RAID), OSDs are created automatically. However, if a data disk was previously partitioned, or had partitions that were later deleted, automatic OSD creation can fail. In my case the disks had been GPT-partitioned; even after the partitions were deleted, the GPT data structures remained on disk, so the cleanup below was needed. If your devices are discovered automatically, skip 5.3.

5.3 Wipe leftover partition metadata with sgdisk

# GPT disks keep their GPT data structures even after the partitions are deleted; wipe them with sgdisk (these servers used to run a Hadoop cluster, so the disks had been partitioned).

yum install gdisk -y
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
sgdisk --zap-all /dev/sdd
sgdisk --zap-all /dev/sde
sgdisk --zap-all /dev/sdf
sgdisk --zap-all /dev/sdg
sgdisk --zap-all /dev/sdh
sgdisk --zap-all /dev/sdi
sgdisk --zap-all /dev/sdj
# Run the same commands on the other hosts

6. Create CephFS (the metadata pool and MDS daemons are created automatically)

ceph fs volume create cephfs
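To confirm that the filesystem and its MDS daemons actually came up, the standard status commands can be used:

ceph fs status cephfs
ceph orch ps --daemon-type mds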

7. Using Ceph and integrating it with Kubernetes

7.1 Mounting CephFS with mount.ceph


Mount CephFS on a client using mount.ceph.

# Configure /etc/hosts on the client

cat >> /etc/hosts << EOF
172.16.213.137 pve-svr-ceph-01
172.16.213.55 pve-svr-ceph-02
172.16.213.141 pve-svr-ceph-03
EOF

# Install ceph-common on the client (it provides the mount.ceph helper)
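The install command itself is not shown; on CentOS, assuming a Ceph (or EPEL) yum repository is available, it is typically:

yum install ceph-common -y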

# Get the ceph secret

Rancher needs a ceph secret to connect to the Ceph cluster; generate it on a ceph node with the following command:

mkdir /etc/ceph
[root@pve-svr-ceph-01 data]# ceph auth get-key client.admin |base64
QVFEVFZjWms2QWxCSkJBQVdoSEdtdnAvSG5IZU1oWXFpbmpyQlE9PQ==
# Write the key to a secret file for mount.ceph.
# Note: the secretfile must contain the plain key as printed by `ceph auth get-key client.admin`,
# not the base64-encoded value above (that form is only needed for the Rancher/Kubernetes secret).
ceph auth get-key client.admin > /etc/ceph/admin.secret
mkdir /cephfs_data

# Mount

mount -t ceph pve-svr-ceph-01:6789,pve-svr-ceph-02:6789,pve-svr-ceph-03:6789:/ /cephfs_data -o name=admin,secretfile=/etc/ceph/admin.secret
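To make the mount persistent across reboots, the equivalent /etc/fstab entry (same monitors and options as the manual mount above) would be something like:

pve-svr-ceph-01:6789,pve-svr-ceph-02:6789,pve-svr-ceph-03:6789:/  /cephfs_data  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0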


7.2 Ceph nfs-ganesha (creating PV/PVC/StorageClass over NFS)
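Note that 7.2.1 below mounts an existing export (hy_data) whose creation is not shown. It can be created from the dashboard, or from the CLI; on quincy the CLI flow is roughly the following (the cluster name, placement and paths here are illustrative assumptions):

# create a cephadm-managed nfs-ganesha cluster
ceph nfs cluster create hy-nfs "pve-svr-ceph-02"
# export a CephFS directory under the pseudo-path /hy_data
ceph nfs export create cephfs --cluster-id hy-nfs --pseudo-path /hy_data --fsname cephfs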

7.2.1 Mounting the export

```

mount -t nfs -o nfsvers=4.1,proto=tcp 172.16.213.55:/hy_data /mnt/hy_k8s_data/
[root@rke-k8s-worker1 ~]# df -h|grep mnt
172.16.213.55:/hy_data 70T 0 70T 0% /mnt/hy_k8s_data
```

7.2.2 Create a Kubernetes PV

Create pv.yml with a heredoc (the original heredoc body was truncated; a sketch follows):
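A minimal sketch of a statically provisioned NFS-backed PV, assuming the 172.16.213.55:/hy_data export mounted in 7.2.1 (the PV name and size are illustrative):

cat > pv.yml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.213.55
    path: /hy_data
EOF
kubectl create -f pv.yml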

7.2.3 Create a Kubernetes PVC

Create pvc.yml (the claim) and pvcuse.yml (a pod that consumes it); both heredoc bodies were also truncated in the original, so sketches follow:
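A hedged sketch that binds to the PV above (the names, default namespace and busybox image are assumptions):

cat > pvc.yml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: ceph-nfs-pv
  resources:
    requests:
      storage: 100Gi
EOF
cat > pvcuse.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: ceph-nfs-pvc
EOF
kubectl create -f pvc.yml -f pvcuse.yml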

7.2.4 Deploy the NFS provisioner for the storage class

cd /data/ceph_k8s_yml
git clone https://gitee.com/unixtommy/nfs-subdir-external-provisioner.git
cd nfs-subdir-external-provisioner
# Use the current default namespace
NS=$(kubectl config get-contexts|grep -e "^*" |awk '{print $5}')
NAMESPACE=${NS:-default}
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
# Apply the RBAC objects
kubectl create -f deploy/rbac.yaml
# Edit deploy/deployment.yaml for your environment: image, names, NFS server address and mount path (see the sketch after this block)
kubectl create -f deploy/deployment.yaml
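For reference, in the upstream nfs-subdir-external-provisioner the parts of deploy/deployment.yaml that normally need editing are the provisioner name and the NFS server/path; the values below are assumptions matching the ganesha export used in 7.2.1:

# excerpt: inside the provisioner container spec
env:
  - name: PROVISIONER_NAME
    value: unixtommy/ceph-nfs-storage    # must match the StorageClass "provisioner" field
  - name: NFS_SERVER
    value: 172.16.213.55
  - name: NFS_PATH
    value: /hy_data
# excerpt: at the pod spec level
volumes:
  - name: nfs-client-root
    nfs:
      server: 172.16.213.55
      path: /hy_data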

7.2.5 Configure the storage class

# With the provisioner deployed (image and NFS address adjusted in deployment.yaml), define the StorageClass
cat > storageclassdeploy.yml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-nfs-client
provisioner: unixtommy/ceph-nfs-storage
allowVolumeExpansion: true
parameters:
  pathPattern: "${.PVC.namespace}-${.PVC.name}"
  onDelete: delete
EOF

Create the SC:

[root@rke-k8s-master1 ceph_k8s_yml]# cat > storageclassdeploy.yml <<'EOF'
> apiVersion: storage.k8s.io/v1
> kind: StorageClass
> metadata:
>   name: ceph-nfs-client
> provisioner: unixtommy/ceph-nfs-storage
> allowVolumeExpansion: true
> parameters:
>   pathPattern: "${.PVC.namespace}-${.PVC.name}"
>   onDelete: delete
> EOF
[root@rke-k8s-master1 ceph_k8s_yml]# ll
total 20
drwxr-xr-x 10 root root 4096 Jul 30 22:43 nfs-subdir-external-provisioner
-rw-r--r-- 1 root root 385 Jul 30 22:37 pvcuse.yml
-rw-r--r-- 1 root root 271 Jul 30 22:32 pvc.yml
-rw-r--r-- 1 root root 366 Jul 30 22:30 pv.yml
-rw-r--r-- 1 root root 215 Jul 30 23:00 storageclassdeploy.yml
[root@rke-k8s-master1 ceph_k8s_yml]# vim storageclassdeploy.yml
[root@rke-k8s-master1 ceph_k8s_yml]# kubectl create -f storageclassdeploy.yml
storageclass.storage.k8s.io/nfs-client created
[root@rke-k8s-master1 ceph_k8s_yml]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-nfs-client unixtommy/ceph-nfs-storage Delete Immediate true 4s
sc-nfs1-1 driver.longhorn.io Delete Immediate true 21d
[root@rke-k8s-master1 ceph_k8s_yml]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-nfs-client unixtommy/ceph-nfs-storage Delete Immediate true 18s
sc-nfs1-1 driver.longhorn.io Delete Immediate true 21d
# To use it, just set storageClassName: ceph-nfs-client in pvc.spec

7.2.6 Mark it as the default storage class

# Mark the default storage class
kubectl patch storageclass ceph-nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'


7.3 Block devices (RBD StorageClass for Kubernetes)

7.3.1 Create a kubernetes pool

ceph osd pool create kubernetes
rbd pool init kubernetes
# CSI setup
# Show mon information (needed for the CSI ConfigMap)
[root@pve-svr-ceph-01 data]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 9e977242-2ed3-11ee-9e35-b02628aff3e6
last_changed 2023-07-30T12:27:17.082792+0000
created 2023-07-30T12:21:40.511162+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:172.16.213.137:3300/0,v1:172.16.213.137:6789/0] mon.pve-svr-ceph-01
1: [v2:172.16.213.55:3300/0,v1:172.16.213.55:6789/0] mon.pve-svr-ceph-02
2: [v2:172.16.213.141:3300/0,v1:172.16.213.141:6789/0] mon.pve-svr-ceph-03

7.3.2 Create csi-config-map.yaml

cat <<EOF > /data/kubernetes/csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "9e977242-2ed3-11ee-9e35-b02628aff3e6",
        "monitors": [
          "172.16.213.137:6789",
          "172.16.213.55:6789",
          "172.16.213.141:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
EOF
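This ConfigMap also has to exist in the cluster before the ceph-csi pods come up; the apply step is not shown above, e.g.:

kubectl apply -f /data/kubernetes/csi-config-map.yaml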

7.3.3 Create a dedicated kubernetes user

[root@pve-svr-ceph-01 kubernetes]# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
[client.kubernetes]
key = AQBNZMZkDc8zFBAATBsiKBdQWLeFPtTS6y8VRA==
# Newer versions of ceph-csi require a KMS ConfigMap; its contents can be empty, but it has to exist
cat <<EOF > /data/kubernetes/csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
EOF

7.3.4 Apply the KMS ConfigMap and create the Ceph config/secret

kubectl apply -f csi-kms-config-map.yaml
# ceph-config-map.yaml holds the Ceph configuration that will be mounted as ceph.conf
cat <<EOF > ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
EOF
# Apply it
kubectl apply -f ceph-config-map.yaml
# Secret holding the client.kubernetes credentials created above
cat <<EOF > csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQBNZMZkDc8zFBAATBsiKBdQWLeFPtTS6y8VRA==
EOF
kubectl apply -f csi-rbd-secret.yaml


7.3.5 Install the ceph-csi plugin (requires access to raw.githubusercontent.com)


kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
kubectl apply -f csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
kubectl apply -f csi-rbdplugin.yaml

7.3.6 Use the block storage (create the StorageClass)

cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 9e977242-2ed3-11ee-9e35-b02628aff3e6
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
# kubectl apply -f csi-rbd-sc.yaml

7.3.7 Create a PVC

cat <<EOF > raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f raw-block-pvc.yaml
# Use it in a pod
cat <<EOF > raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
EOF
$ kubectl apply -f raw-block-pod.yaml
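The PVC above requests a raw block device (volumeMode: Block). Most workloads instead want a formatted filesystem; with the same StorageClass that is simply a PVC with volumeMode: Filesystem (a hedged sketch; the name and size are illustrative):

cat <<EOF > rbd-fs-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-fs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
kubectl apply -f rbd-fs-pvc.yaml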

8. Uninstalling Ceph

Even if the install never quite worked out, you should at least learn how to tear it down. :)

# Remove the ceph daemons from every host in the cluster; run this on each node

ceph fsid
./cephadm rm-cluster --force --zap-osds --fsid 88bf45d8-2ecb-11ee-a554-b02628aff3e6
