All Subvolumes Are Down. Going Offline Until At Least One of Them Comes Back Up

1. Introduction to GlusterFS

GlusterFS is the core of the scale-out storage solution Gluster. It is an open-source distributed file system with strong horizontal scalability; by scaling out it can support several petabytes of storage and thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages the data under a single global namespace.

GlusterFS may sound unfamiliar; NFS, GFS, and HDFS are probably better known, and of these NFS is the most widely used because it is simple to manage. However, NFS, like MooseFS, has a single point of failure, which is usually worked around by combining it with DRBD for block-level replication. GlusterFS does not have this problem at all, because it is a completely decentralized system with no central metadata server.

2. GlusterFS Features

Scalability and high performance

GlusterFS combines two characteristics to provide highly scalable storage from a few terabytes up to several petabytes. The scale-out architecture lets you grow capacity and performance simply by adding resources; disk, compute, and I/O resources can be added independently, and high-speed interconnects such as 10GbE and InfiniBand are supported. Gluster's elastic hashing (Elastic Hash) removes the need for a metadata server, eliminating that single point of failure and performance bottleneck and enabling truly parallel data access.

High availability

GlusterFS can automatically replicate files, for example by mirroring or keeping multiple copies, so data stays accessible even when hardware fails. Self-healing restores data to a consistent state, and the repair runs incrementally in the background with almost no performance overhead. GlusterFS does not define its own proprietary on-disk format; it stores files on standard, mainstream disk file systems (such as EXT3 or ZFS), so the data can always be copied and accessed with ordinary tools.

Elastic volume management

Data is stored in logical volumes, which are carved out of a virtualized pool of physical storage. Storage servers can be added and removed online without interrupting applications. Logical volumes can grow and shrink across all configured servers, be migrated between servers to balance capacity, and have servers added or removed, all while staying online. File system configuration changes can also be applied online in real time, making it possible to adapt to changing workloads or tune performance on the fly.

3. Terminology

GlusterFS is an open-source distributed file system. For more details on its features, see the reference documents in the appendix.

Brick: the storage unit in GlusterFS, normally an exported directory on a server in a trusted storage pool, identified by host name and directory name, e.g. 'SERVER:EXPORT'

Client: a machine that has mounted a GlusterFS volume

Extended Attributes: xattrs are a file system feature that lets users or programs associate metadata with files and directories.

FUSE: Filesystem in Userspace is a loadable kernel module that lets unprivileged users create their own file systems without modifying kernel code; the file system code runs in user space and is bridged to the kernel through the FUSE module.

Geo-Replication: asynchronous, incremental replication of data from one Gluster site to another over a LAN, WAN, or the Internet.

GFID: every file or directory in a GlusterFS volume has a unique 128-bit identifier associated with it, which is used to emulate an inode number.

Namespace: each Gluster volume exports a single namespace as a POSIX mount point

Node: a machine that hosts one or more bricks

RDMA: Remote Direct Memory Access, which lets one machine access another machine's memory directly without going through either side's operating system.

RRDNS: round-robin DNS, a load-balancing technique in which the DNS server returns a different host on successive queries.

Self-heal: a background process that detects inconsistencies between the copies of files and directories in a replicated volume and repairs them.

Split-brain: a state in which the copies in a replica set diverge and GlusterFS cannot decide on its own which copy is correct.

Translator: a stackable module that implements a piece of GlusterFS functionality; volumes and clients are assembled as graphs of translators.

Volfile: the configuration file of a glusterfs process, normally located under /var/lib/glusterd/vols/volname

Volume: a logical collection of bricks

Official website: http://www.gluster.org/

Downloads: http://download.gluster.org/pub/gluster/glusterfs/

Lab environment

Operating system: CentOS 7.5

10.30.1.231    glfs1

10.30.1.232    glfs2

10.30.1.233    glfs3

10.30.1.234    glfs4

Add hosts entries so the cluster hosts can reach each other by name

Add the entries on both the management server and the agent nodes so that every host in the cluster can resolve every other host:

#echo -e "10.30.1.231    glfs1\n10.30.1.232    glfs2\n10.30.1.233    glfs3\n10.30.1.234    glfs4" >> /etc/hosts

The resulting /etc/hosts on every machine:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.30.1.231    glfs1

10.30.1.232    glfs2

10.30.1.233    glfs3

10.30.1.234    glfs4
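Before moving on, it can be worth confirming that every node resolves and answers. A minimal check (a hypothetical helper loop, not part of the original walkthrough) could look like this:

for h in glfs1 glfs2 glfs3 glfs4; do
    getent hosts "$h" >/dev/null && ping -c1 -W1 "$h" >/dev/null && echo "$h OK" || echo "$h FAILED"
done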

Disable SELinux and the firewall

setenforce 0

sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

systemctl stop iptables

systemctl stop firewalld

systemctl disable iptables

systemctl disable firewalld
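If you want to confirm that SELinux and the firewall are really out of the way, a quick check (a sketch, not from the original article) might be:

getenforce                      # expect Permissive now, Disabled after the next reboot
systemctl is-active firewalld   # expect inactive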

1. Install GlusterFS

If your hosts have Internet access, the easiest way to install is:

yum install centos-release-gluster7 -y

yum  install -y glusterfs-server glusterfs-cli glusterfs-geo-replication

2. Configure GlusterFS

2.1 Check the version

glusterfs -V

2.2 Start/stop the GlusterFS services

systemctl start glusterfsd

systemctl start glusterd

2.3 Enable the services at boot

systemctl enable glusterfsd

systemctl enable glusterd

2.4 Check the service status

systemctl status glusterd

2.5 Add the storage hosts to the trusted storage pool

[root@glfs1 ~]# gluster peer probe glfs2

[root@glfs1 ~]# gluster peer probe glfs3

[root@glfs1 ~]# gluster peer probe glfs4

Note: on each node you only need to probe the other servers, not the node itself.
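The three probes above can also be issued in one small loop from glfs1 (a sketch assuming the same host names):

for h in glfs2 glfs3 glfs4; do gluster peer probe "$h"; done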

2.6 Check the pool status

gluster peer status

Number of Peers: 3

Hostname: glfs2

Uuid: ec9f45ad-afd8-45a9-817f-f76da020eb8d

State: Peer in Cluster (Connected)

Hostname: glfs3

Uuid: efc1aaec-191b-4c2c-bcbf-58467d2611c1

State: Peer in Cluster (Connected)

Hostname: glfs4

Uuid: 7fc1e778-c2f7-4970-b8f6-039ab0d2792b

State: Peer in Cluster (Connected)

2.7 Preparation before configuration

Install the XFS tools

yum -y install xfsprogs

Run fdisk -l to list the block devices; you should see output similar to:

Disk /dev/sda: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000ee169

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          64      512000   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2              64         325     2097152   82  Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3             325       13055   102247424   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Optional: partition the disk

fdisk /dev/sdb

n

p

1

w

Format the disk

mkfs.xfs -f /dev/sdb

Mount it

mkdir -p /storage/brick1

mount /dev/sdb /storage/brick1

Check the mount

[root@glfs3 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  768M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   33M  100G   1% /storage/brick1

Add an entry to /etc/fstab so the file system is mounted at boot

echo "/dev/sdb  /storage/brick1    xfs defaults 0 0"  >> /etc/fstab

3. Creating Volumes and Other Operations

Volume types and how to create them

distributed: distributed volume. Files are spread across the bricks that make up the volume according to a hash algorithm. Typically used to scale out capacity; it provides no data redundancy unless the underlying bricks use external redundancy such as RAID.

gluster volume create gv1  glfs1:/storage/brick1 glfs2:/storage/brick1  force  ## scales out storage capacity

replicated: replicated volume, similar to RAID 1. The replica count must equal the number of bricks in the volume; offers high availability.

gluster volume create gv2 replica 2 glfs3:/storage/brick2 glfs4:/storage/brick2 force

gluster volume create gv5  replica 4 glfs1:/storage/brick1 glfs2:/storage/brick1 glfs3:/storage/brick1  glfs4:/storage/brick1  force

striped: striped volume, similar to RAID 0. The stripe count must equal the number of bricks in the volume. A single file is split into chunks (chunk size is configurable, 128 KB by default) and the chunks are stored on different bricks to improve access performance.

gluster volume create gv3 stripe 2 glfs1:/storage/brick2 glfs2:/storage/brick2 force

distribute striped: distributed striped volume. The number of bricks must be a multiple (>= 2x) of the stripe count; combines distribution and striping.

gluster volume create gv5 stripe 2 replica 2 glfs1:/storage/brick1 glfs2:/storage/brick1 glfs3:/storage/brick1  glfs4:/storage/brick1  force

distribute replicated: distributed replicated volume. The number of bricks must be a multiple (>= 2x) of the replica count; combines distribution and replication.

gluster volume create gv7  replica 2 glfs1:/storage/brick3 glfs2:/storage/brick3 glfs3:/storage/brick3  glfs4:/storage/brick3  force

distribute replicated volumes are what roughly 80% of enterprises choose (a quick way to confirm the layout you created is shown below).
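Whatever layout you choose, the resulting type can be confirmed after creation; for example (a sketch, assuming a volume such as gv7 created with one of the commands above):

gluster volume info gv7 | grep -E 'Type|Number of Bricks'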

-----------------------------------------------------------------------------------------------

3.1 distributed: distributed volume

Create the distributed volume

gluster volume create gv1  glfs1:/storage/brick1 glfs2:/storage/brick1 force

Start the volume

gluster volume start gv1

Check the volume information

$ gluster volume info

Volume Name: gv1

Type: Distribute

Volume ID: 341d62b9-936e-474a-ae39-fb3ac195cd41

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: glfs1:/storage/brick1

Brick2: glfs2:/storage/brick1

Options Reconfigured:

performance.readdir-ahead: on

Mount the volume on a directory

[root@glfs3 ~]# mount -t glusterfs 127.0.0.1:/gv1 /mnt

[root@glfs3 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  771M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   33M  100G   1% /storage/brick1

127.0.0.1:/gv1  200G   65M  200G   1% /mnt

Mount over NFS

[root@glfs4 ~]# mount -o mountproto=tcp -t nfs 10.1.1.230:/gv1 /mnt

[root@glfs4 ~]# df -h

Filesystem       Size  Used Avail Use% Mounted on

/dev/sda3         96G  771M   91G   1% /

tmpfs            499M     0  499M   0% /dev/shm

/dev/sda1        477M   28M  425M   7% /boot

/dev/sdb         100G   33M  100G   1% /storage/brick1

10.1.1.230:/gv1  200G   64M  200G   1% /mnt
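To see the hash distribution in action, you can create a handful of small files through the client mount and then list both bricks of gv1; the file names are hypothetical and the commands are only a sketch:

touch /mnt/testfile{1..10}
ls /storage/brick1/        # run on glfs1 and then on glfs2: each brick holds a subset, together they hold all 10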

3.2 replicated: replicated volume

Create the replicated volume

gluster volume create gv2 replica 2 glfs3:/storage/brick2 glfs4:/storage/brick2 force

[root@glfs4 ~]# gluster volume start gv2

volume start: gv2: success

[root@glfs4 ~]# gluster volume  info gv2

Volume Name: gv2

Type: Replicate

Volume ID: daa5f5d7-7337-4931-9b04-af4a340a44fe

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: glfs3:/storage/brick2

Brick2: glfs4:/storage/brick2

Options Reconfigured:

performance.readdir-ahead: on

[root@glfs1 ~]# mount -t glusterfs 127.0.0.1:/gv2 /mnt

[root@glfs1 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  771M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   33M  100G   1% /storage/brick1

127.0.0.1:/gv2  100G   33M  100G   1% /mnt
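A quick way to see the replication (a sketch; the file name is hypothetical): write one file through the mount point and check that an identical copy shows up on both bricks of gv2.

echo "replica test" > /mnt/replica-test.txt
ls -l /storage/brick2/replica-test.txt    # run on glfs3 and on glfs4: both bricks hold the full file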

3.3 striped: striped volume

[root@glfs3 ~]# gluster volume create gv3 stripe 2 glfs1:/storage/brick2 glfs2:/storage/brick2 force

[root@glfs3 ~]# gluster volume start gv3

volume start: gv3: success

[root@glfs3 ~]# gluster volume info gv3

Volume Name: gv3

Type: Stripe

Volume ID: 4a231cfc-7a73-4c1d-91bc-302be249861b

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: glfs1:/storage/brick2

Brick2: glfs2:/storage/brick2

Options Reconfigured:

performance.readdir-ahead: on

[root@glfs2 ~]# mount -t glusterfs 127.0.0.1:/gv1 /gv1

[root@glfs2 ~]# mount -t glusterfs 127.0.0.1:/gv2 /gv2

[root@glfs2 ~]# mount -t glusterfs 127.0.0.1:/gv3 /gv3

[root@glfs2 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  769M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   33M  100G   1% /storage/brick1

/dev/sdc        100G   33M  100G   1% /storage/brick2

127.0.0.1:/gv1  200G   65M  200G   1% /gv1

127.0.0.1:/gv3  200G   65M  200G   1% /gv3

127.0.0.1:/gv2  100G   33M  100G   1% /gv2

[root@glfs2 gv3]# dd if=/dev/zero bs=1024 count=10000 of=/gv3/10M.file

[root@glfs2 gv3]# dd if=/dev/zero bs=1024 count=20000 of=/gv3/20M.file

[root@glfs2 gv3]# ls -lh

total 30M

-rw-r--r-- 1 root root 9.8M Mar  1 14:12 10M.file

-rw-r--r-- 1 root root  20M Mar  1 14:14 20M.file

Check whether the files were split up and stored on different servers

[root@glfs1 brick2]# ls -lh /storage/brick2/

total 15M

-rw-r--r-- 2 root root 4.9M Mar  1 14:12 10M.file

-rw-r--r-- 2 root root 9.8M Mar  1 14:14 20M.file

[root@glfs2 gv3]# ls -lh /storage/brick2/

total 15M

-rw-r--r-- 2 root root 4.9M Mar  1 14:12 10M.file

-rw-r--r-- 2 root root 9.8M Mar  1 14:14 20M.file

Copy a log file over and look at how it is stored

[root@glfs2 gv3]# ls -lh /var/log/messages-20180228

-rw-------. 1 root root 366K Feb 28 18:20 /var/log/messages-20180228

[root@glfs2 gv3]# cp /var/log/messages-20180228 .

[root@glfs2 gv3]# ls -lh /gv3

total 30M

-rw-r--r-- 1 root root 9.8M Mar  1 14:12 10M.file

-rw-r--r-- 1 root root  20M Mar  1 14:14 20M.file

-rw------- 1 root root 366K Mar  1 16:46 messages-20180228

[root@glfs2 gv3]# ls -lh /storage/brick2/

total 15M

-rw-r--r-- 2 root root 4.9M Mar  1 14:12 10M.file

-rw-r--r-- 2 root root 9.8M Mar  1 14:14 20M.file

-rw------- 2 root root 128K Mar  1 16:46 messages-20180228

[root@glfs1 brick2]# ls -lh /storage/brick2/

total 15M

-rw-r--r-- 2 root root 4.9M Mar  1 14:12 10M.file

-rw-r--r-- 2 root root 9.8M Mar  1 14:14 20M.file

-rw------- 2 root root 238K Mar  1 16:46 messages-20180228

-----------------------------------------------------------------------

3.4 Distributed-Replicate: distributed replicated volume (core content; the layout most enterprises use)

[root@glfs1 ~]# mkfs.xfs -f /dev/sdd

[root@glfs1 ~]# mkdir -p /storage/brick3

[root@glfs1 ~]# mount /dev/sdd /storage/brick3

[root@glfs1 ~]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  790M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   43M  100G   1% /storage/brick1

/dev/sdc        100G   33M  100G   1% /storage/brick2

/dev/sdd       100G   33M  100G   1% /storage/brick3

[root@glfs2 ~]# echo "/dev/sdd  /storage/brick3    xfs defaults 0 0"  >> /etc/fstab

[root@glfs2 ~]# mount -a

[root@glfs2 ~]# gluster volume create gv7  replica 2 glfs1:/storage/brick3 glfs2:/storage/brick3 glfs3:/storage/brick3  glfs4:/storage/brick3  force

[root@glfs2 ~]# gluster volume start gv7

[root@glfs2 ~]# gluster volume info gv7

Volume Name: gv7

Type: Distributed-Replicate

Volume ID: 59b3a0de-48b9-4de0-b39a-174481bfd28e

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: glfs1:/storage/brick3

Brick2: glfs2:/storage/brick3

Brick3: glfs3:/storage/brick3

Brick4: glfs4:/storage/brick3

Options Reconfigured:

performance.readdir-ahead: on

# When multiple files are stored, they are hash-distributed across the 2 brick groups, and each file is kept in 2 copies

[root@glfs2 ~]# mkdir -pv /gv7

[root@glfs2 ~]# mount -t glusterfs 127.0.0.1:/gv7 /gv7

The steps above created and mounted the distributed replicated volume; the steps below demonstrate its behaviour: within a replica set every file is kept as identical copies, while different files are distributed across the different replica sets.

[root@glfs1 gv7]# ls -lh /var/log/ | grep ana

-rw-------. 1 root root 5.6K Jan 30 19:11 anaconda.ifcfg.log

-rw-------. 1 root root  23K Jan 30 19:11 anaconda.log

-rw-------. 1 root root  42K Jan 30 19:11 anaconda.program.log

-rw-------. 1 root root 185K Jan 30 19:11 anaconda.storage.log

-rw-------. 1 root root 119K Jan 30 19:11 anaconda.syslog

-rw-------. 1 root root  23K Jan 30 19:11 anaconda.xlog

-rw-------. 1 root root 2.6K Jan 30 19:11 anaconda.yum.log

[root@glfs1 gv7]# ls -lh /var/log/ | grep ana | wc -l

7

[root@glfs1 gv7]# cp /var/log/ana* .

[root@glfs1 gv7]# ls -lh .

total 399K

-rw------- 1 root root 5.6K Mar  2 10:32 anaconda.ifcfg.log

-rw------- 1 root root  23K Mar  2 10:32 anaconda.log

-rw------- 1 root root  42K Mar  2 10:32 anaconda.program.log

-rw------- 1 root root 185K Mar  2 10:32 anaconda.storage.log

-rw------- 1 root root 119K Mar  2 10:32 anaconda.syslog

-rw------- 1 root root  23K Mar  2 10:32 anaconda.xlog

-rw------- 1 root root 2.6K Mar  2 10:32 anaconda.yum.log

[root@glfs1 gv7]# ls -lh . | grep ana | wc -l

7

[root@glfs1 gv7]# ls -lh /storage/brick3

total 248K

-rw------- 2 root root 5.6K Mar  2 10:32 anaconda.ifcfg.log

-rw------- 2 root root  23K Mar  2 10:32 anaconda.log

-rw------- 2 root root 185K Mar  2 10:32 anaconda.storage.log

-rw------- 2 root root  23K Mar  2 10:32 anaconda.xlog

-rw------- 2 root root 2.6K Mar  2 10:32 anaconda.yum.log

[root@glfs1 gv7]# ls -lh /storage/brick3 | grep ana | wc -l

5

[root@glfs2 ~]# ls -lh /storage/brick3

total 248K

-rw------- 2 root root 5.6K Mar  2 10:32 anaconda.ifcfg.log

-rw------- 2 root root  23K Mar  2 10:32 anaconda.log

-rw------- 2 root root 185K Mar  2 10:32 anaconda.storage.log

-rw------- 2 root root  23K Mar  2 10:32 anaconda.xlog

-rw------- 2 root root 2.6K Mar  2 10:32 anaconda.yum.log

[root@glfs2 ~]# ls -lh /storage/brick3 | grep ana | wc -l

5

[root@glfs3 ~]# ls -lh /storage/brick3

total 164K

-rw------- 2 root root  42K Mar  2 10:32 anaconda.program.log

-rw------- 2 root root 119K Mar  2 10:32 anaconda.syslog

[root@glfs3 ~]# ls -lh /storage/brick3 | grep ana | wc -l

2

[root@glfs4 ~]# ls -lh /storage/brick3

total 164K

-rw------- 2 root root  42K Mar  2 10:32 anaconda.program.log

-rw------- 2 root root 119K Mar  2 10:32 anaconda.syslog

[root@glfs4 ~]# ls -lh /storage/brick3 | grep ana | wc -l

2

Notes:

Which bricks form a replica set depends on the order in which the bricks are specified.

The number of bricks must be N times the replica count K; the brick list is split into consecutive groups of K, forming N replica sets.
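To verify that the replica pairs of gv7 are healthy, the standard commands are (shown here only as a sketch of how you might check):

gluster volume status gv7      # every brick should be listed as Online
gluster volume heal gv7 info   # lists entries still waiting to be self-healed, per brick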

-----------------------------------------------------------------------

3.5 distribute striped: distributed striped volume

Data is split into stripes; the stripes are replicated within each replica set and distributed across the sets. (The example below passes both stripe 2 and replica 2, so the resulting volume type is reported as Striped-Replicate.)

[root@glfs2 ~]# gluster volume create gv5 stripe 2 replica 2 glfs1:/storage/brick1 glfs2:/storage/brick1 glfs3:/storage/brick1  glfs4:/storage/brick1  force

[root@glfs2 ~]# gluster volume start gv5

[root@glfs2 ~]# gluster volume info gv5

Volume Name: gv5

Type: Striped-Replicate

Volume ID: edfe58ad-7174-495e-821a-400bc7df9a1e

Status: Started

Number of Bricks: 1 x 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: glfs1:/storage/brick1

Brick2: glfs2:/storage/brick1

Brick3: glfs3:/storage/brick1

Brick4: glfs4:/storage/brick1

Options Reconfigured:

performance.readdir-ahead: on

[root@glfs1 gv6]# cp /var/log/messages-20180228 /gv5

[root@glfs1 gv6]# ls -lh /gv5/

total 20M

-rw-r--r-- 1 root root  20M Mar  2 09:51 gv5M.file

-rw------- 1 root root 422K Mar  2 10:08 messages-20180228

[root@glfs1 gv6]# ls -lh /storage/brick1/messages-20180228

-rw------- 2 root root 256K Mar  2 10:08 /storage/brick1/messages-20180228

[root@glfs2 ~]# ls -lh /storage/brick1/messages-20180228

-rw------- 2 root root 256K Mar  2 10:08 /storage/brick1/messages-20180228

[root@glfs3 ~]# ls -lh /storage/brick1/messages-20180228

-rw------- 2 root root 166K Mar  2 10:08 /storage/brick1/messages-20180228

[root@glfs4 ~]# ls -lh /storage/brick1/messages-20180228

-rw------- 2 root root 166K Mar  2 10:08 /storage/brick1/messages-20180228

3.6 distributed striped replicated volume (core content, recommended for production)

# mkfs.xfs -f /dev/sdc

# mkdir -p /storage/brick2

# mkfs.xfs -f /dev/sdd

# mkdir -p /storage/brick3

# mount /dev/sdc /storage/brick2

# mount /dev/sdd /storage/brick3

# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  790M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   43M  100G   1% /storage/brick1

/dev/sdc        100G   33M  100G   1% /storage/brick2

/dev/sdd       100G   33M  100G   1% /storage/brick3

# echo "/dev/sdc  /storage/brick2    xfs defaults 0 0"  >> /etc/fstab

# echo "/dev/sdd  /storage/brick3    xfs defaults 0 0"  >> /etc/fstab

# gluster volume create gv9 stripe 2 replica 2  glfs1:/storage/brick2 glfs2:/storage/brick2 glfs3:/storage/brick2  glfs4:/storage/brick2 glfs1:/storage/brick3 glfs2:/storage/brick3 glfs3:/storage/brick3  glfs4:/storage/brick3 force

# gluster volume start gv9

[root@glfs2 ~]# gluster volume info gv9

Volume Name: gv9

Type: Distributed-Striped-Replicate

Volume ID: 59b3a0de-48b9-4de0-b39a-174481bfd18e

Status: Started

Number of Bricks: 2 x 2 x 2 = 8

Transport-type: tcp

# When multiple files are stored, they are hash-distributed across the distribute subvolumes; each file is kept in 2 copies and split into stripes

[root@glfs2 ~]# mkdir -pv /gv9

[root@glfs2 ~]# mount -t glusterfs 127.0.0.1:/gv9 /gv9

Note: the number of bricks must be an integer multiple of N*M, where N is the stripe count and M is the replica count.

glfs1:/storage/brick2, glfs2:/storage/brick2, glfs3:/storage/brick2 and glfs4:/storage/brick2 together form one distribute subvolume: glfs1:/storage/brick2 and glfs2:/storage/brick2 form one replica pair, glfs3:/storage/brick2 and glfs4:/storage/brick2 form the other, and the two pairs are striped together.

glfs1:/storage/brick3, glfs2:/storage/brick3, glfs3:/storage/brick3 and glfs4:/storage/brick3 form the second distribute subvolume in the same way.
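As with gv5, you can make this layout visible by writing one file through the /gv9 mount and listing the bricks on every node; the commands below are only a sketch reusing the log file from the earlier examples:

cp /var/log/messages-20180228 /gv9/
ls -lh /storage/brick2/ /storage/brick3/   # run on all four nodes: the file hashes to one distribute subvolume, and each of its stripe chunks appears on one replica pair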

3.7 Expanding a volume later with add-brick

Stop the volume

[root@glfs2 gv3]# gluster volume stop gv2

Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y

volume stop: gv2: success

[root@glfs3 brick2]# gluster volume add-brick gv2 replica 2 glfs3:/storage/brick1 glfs4:/storage/brick1 force

[root@glfs3 brick2]# gluster volume start gv2

volume start: gv2: success

[root@glfs3 brick2]# gluster volume info gv2

Volume Name: gv2

Type: Distributed-Replicate

Volume ID: daa5f5d7-7337-4931-9b04-af4a340a44fe

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: glfs3:/storage/brick2

Brick2: glfs4:/storage/brick2

Brick3: glfs3:/storage/brick1

Brick4: glfs4:/storage/brick1

Options Reconfigured:

performance.readdir-ahead: on

[root@glfs3 brick2]# df -h

Filesystem      Size  Used Avail Use% Mounted on

/dev/sda3        96G  770M   91G   1% /

tmpfs           499M     0  499M   0% /dev/shm

/dev/sda1       477M   28M  425M   7% /boot

/dev/sdb        100G   91M  100G   1% /storage/brick2

127.0.0.1:/gv2  100G   91M  100G   1% /gv2

127.0.0.1:/gv1  200G   65M  200G   1% /gv1

127.0.0.1:/gv3  200G   94M  200G   1% /gv3

/dev/sdc        100G   33M  100G   1% /storage/brick1

[root@glfs4 brick2]# ls -lh /storage/brick{1,2}

/storage/brick1:

total 0

/storage/brick2:

total 59M

-rw-r--r-- 2 root root 9.8M Mar  1 14:32 10M.file

-rw-r--r-- 2 root root  20M Mar  1 14:32 20M.file

-rw-r--r-- 2 root root  30M Mar  1 15:20 30M.file

You will notice that the newly added bricks hold none of the existing data. To make the new layout take effect and spread the existing data across all bricks, the volume has to be rebalanced.

Run:

[root@glfs4 brick2]# gluster volume rebalance gv2 start

volume rebalance: gv2: success: Rebalance on gv2 has been started successfully. Use rebalance status command to check status of the rebalance process.

ID: 0de2e148-9952-4ffe-b51c-a79b96a820de

[root@glfs4 brick2]# ls -lh /storage/brick{1,2}

/storage/brick1:

total 20M

-rw-r--r-- 2 root root 20M Mar  1 14:32 20M.file

/storage/brick2:

total 40M

-rw-r--r-- 2 root root 9.8M Mar  1 14:32 10M.file

-rw-r--r-- 2 root root  30M Mar  1 15:20 30M.file

The result after adding more files:

[root@glfs3 brick2]# ls -lh /storage/brick{1,2}

/storage/brick1:

total 49M

-rw-r--r-- 2 root root 9.8M Mar  1 16:02 10-123M.file

-rw-r--r-- 2 root root 9.8M Mar  1 16:01 10-12M.file

-rw-r--r-- 2 root root 9.8M Mar  1 16:01 10-13M.file

-rw-r--r-- 2 root root  20M Mar  1 14:32 20M.file

/storage/brick2:

total 69M

-rw-r--r-- 2 root root 9.8M Mar  1 16:01 10-1M.file

-rw-r--r-- 2 root root 9.8M Mar  1 16:02 10-7M.file

-rw-r--r-- 2 root root 9.8M Mar  1 16:02 10-9M.file

-rw-r--r-- 2 root root 9.8M Mar  1 14:32 10M.file

-rw-r--r-- 2 root root  30M Mar  1 15:20 30M.file

Check the rebalance progress

[root@glfs4 brick2]# gluster volume rebalance gv2 status

                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s

                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------

                               localhost                0        0Bytes             0             0             0            completed        0:0:0

                                   glfs3                0        0Bytes             3             0             0            completed        0:0:0

volume rebalance: gv2: success
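If you only want newly created files to start landing on the added bricks and do not need to move existing data, GlusterFS also offers a lighter variant of rebalance (a sketch; it only recalculates the hash layout):

gluster volume rebalance gv2 fix-layout start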

3.8 Shrinking a volume with remove-brick

[root@glfs2 ~]# ls -lh /gv2

total 118M

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-123M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-12M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-13M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-1M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-7M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-9M.file

-rw-r--r-- 1 root root 9.8M Mar  1 14:32 10M.file

-rw-r--r-- 1 root root  20M Mar  1 14:32 20M.file

-rw-r--r-- 1 root root  30M Mar  1 15:20 30M.file

[root@glfs4 brick2]# gluster volume stop gv2

[root@glfs4 brick2]# gluster volume remove-brick gv2 replica 2 glfs3:/storage/brick2 glfs4:/storage/brick2 force

Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y

volume remove-brick commit force: success

[root@glfs4 brick2]# gluster volume start gv2

Looking at the data again, half of the files appear to be lost, but they still exist on the removed servers' disks; they are simply no longer visible through the volume.

[root@glfs2 ~]# ls -lh /gv2

total 49M

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-123M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-12M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-13M.file

-rw-r--r-- 1 root root  20M Mar  1 14:32 20M.file

To bring the data back, add the bricks back in with the following commands:

[root@glfs4 brick2]# gluster volume stop gv2

[root@glfs4 brick2]# gluster volume add-brick gv2 replica 2 glfs3:/storage/brick2 glfs4:/storage/brick2 force

[root@glfs4 brick2]# gluster volume start gv2

[root@glfs2 ~]# ls -lh /gv2

total 118M

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-123M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-12M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-13M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:01 10-1M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-7M.file

-rw-r--r-- 1 root root 9.8M Mar  1 16:02 10-9M.file

-rw-r--r-- 1 root root 9.8M Mar  1 14:32 10M.file

-rw-r--r-- 1 root root  20M Mar  1 14:32 20M.file

-rw-r--r-- 1 root root  30M Mar  1 15:20 30M.file
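The removal above used "force", which is why half of the files disappeared from the volume: nothing was migrated off the bricks first. A gentler pattern, with the volume staying online, is the start/status/commit workflow, sketched below (syntax details can vary between GlusterFS releases, so treat this as an outline rather than the exact procedure used in this article):

gluster volume remove-brick gv2 glfs3:/storage/brick2 glfs4:/storage/brick2 start
gluster volume remove-brick gv2 glfs3:/storage/brick2 glfs4:/storage/brick2 status
gluster volume remove-brick gv2 glfs3:/storage/brick2 glfs4:/storage/brick2 commit   # only after status reports completed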

3.9 Deleting a volume

[root@glfs4 brick2]# umount /gv2

[root@glfs4 brick2]# gluster volume stop gv2

Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y

volume stop: gv2: success

[root@glfs4 brick2]# gluster volume delete gv2

Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y

volume delete: gv2: success

The volume is now gone, and the files are no longer accessible through it because the volume no longer exists, but they are still on the bricks' disks.

[root@glfs4 brick2]# gluster volume info gv2

Volume gv2 does not exist
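If you later want to reuse those brick directories in a new volume, glusterd will normally refuse because of metadata left behind by the deleted volume. A commonly used cleanup looks roughly like this (a sketch; it erases the brick's Gluster metadata, so double-check the paths first):

setfattr -x trusted.glusterfs.volume-id /storage/brick2
setfattr -x trusted.gfid /storage/brick2
rm -rf /storage/brick2/.glusterfs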

About the following error:

#  mount -t glusterfs 10.30.1.231:/gv9 /var/lib/nova/instances/

Mount failed. Check the log file  for more details.

[root@node3 ~]# tail -f  /var/log/glusterfs/var-lib-nova-instances.log

[2020-01-12 02:18:19.077178] I [MSGID: 100030] [glusterfsd.c:2867:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 7.1 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=10.30.1.231 --volfile-id=/gv9 /var/lib/nova/instances)

[2020-01-12 02:18:19.078458] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 5075

[2020-01-12 02:18:19.086881] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0

[2020-01-12 02:18:19.087028] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1

[2020-01-12 02:18:19.096996] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-0: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.111082] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.111130] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-0: DNS resolution failed on host glfs1

[2020-01-12 02:18:19.111223] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-1: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.111275] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.118817] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.118865] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-1: DNS resolution failed on host glfs2

[2020-01-12 02:18:19.118931] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-2: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.118957] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.125862] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.125899] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-2: DNS resolution failed on host glfs3

[2020-01-12 02:18:19.125957] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-3: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.125984] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.133110] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.133141] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-3: DNS resolution failed on host glfs4

[2020-01-12 02:18:19.133215] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-4: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.133296] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.140598] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.140630] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-4: DNS resolution failed on host glfs1

[2020-01-12 02:18:19.140701] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-5: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.140759] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-2: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.148220] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.148260] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-5: DNS resolution failed on host glfs2

[2020-01-12 02:18:19.148321] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-6: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.148357] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-2: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.155769] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.155831] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-6: DNS resolution failed on host glfs3

[2020-01-12 02:18:19.155903] I [MSGID: 114020] [client.c:2436:notify] 0-gv9-client-7: parent translators are ready, attempting connect on transport

[2020-01-12 02:18:19.155935] E [MSGID: 108006] [afr-common.c:5358:__afr_handle_child_down_event] 0-gv9-replicate-3: All subvolumes are down. Going offline until at least one of them comes back up.

[2020-01-12 02:18:19.163045] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)

[2020-01-12 02:18:19.163083] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-gv9-client-7: DNS resolution failed on host glfs4

The log shows that the OpenStack compute node did not know the mapping between the Gluster host names and their IP addresses. Add the corresponding entries to /etc/hosts on all nodes:

10.30.1.231    glfs1

10.30.1.232    glfs2

10.30.1.233    glfs3

10.30.1.234    glfs4
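After fixing /etc/hosts, a quick way to confirm the fix before retrying (a sketch using the same names and mount point as above):

getent hosts glfs1 glfs2 glfs3 glfs4
mount -t glusterfs 10.30.1.231:/gv9 /var/lib/nova/instances/
df -h /var/lib/nova/instances/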

References:

Red Hat documentation on GlusterFS: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html-single/Administration_Guide/index.html

Reference blog: http://blog.csdn.net/zzulp/article/details/39527441

Source: https://www.cnblogs.com/dexter-wang/articles/12235785.html