This article walks through how to set up iSCSI, NFS, and Ceph-backed storage in some detail; interested readers can use it as a reference.
I. iSCSI Setup
1. Server side
[1] Install administration tools.
[root@dlp ~]# yum -y install scsi-target-utils
[2] Configure the iSCSI target.
For example, create a disk image under the [/iscsi_disks] directory and set it as a shared disk.
# create a disk image
[root@dlp ~]# mkdir /iscsi_disks
[root@dlp ~]# dd if=/dev/zero of=/iscsi_disks/disk01.img count=0 bs=1 seek=80G
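Note that this `dd` writes no data at all: `count=0` copies nothing, and `seek=80G` simply sets the file size, producing a sparse 80 GiB image (85899345920 bytes, matching the "85899 MB" tgtadm reports below) that occupies almost no disk space until blocks are actually written. This can be verified with a throwaway file:

```shell
# count=0 : copy no blocks; seek=80G : set the size to 80 GiB without writing.
dd if=/dev/zero of=/tmp/disk01.img count=0 bs=1 seek=80G
ls -lh /tmp/disk01.img   # apparent size: 80G
du -h /tmp/disk01.img    # actual allocation: ~0
rm -f /tmp/disk01.img
```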
[root@dlp ~]# vi /etc/tgt/targets.conf
# add the following to the end
# to define more targets, add another <target> - </target> block in the same way
# naming rule : [ iqn.year-month.domain:any name ]
<target iqn.2014-08.world.server:target00>
    # provided device as an iSCSI target
    backing-store /iscsi_disks/disk01.img
    # iSCSI initiator's IP address that is allowed to connect
    initiator-address 10.0.0.31
    # authentication info ( set any username / password you like )
    incominguser username password
</target>
[root@dlp ~]# /etc/rc.d/init.d/tgtd start
Starting SCSI target daemon: [ OK ]
[root@dlp ~]# chkconfig tgtd on
# confirm status
[root@dlp ~]# tgtadm --mode target --op show
Target 1: iqn.2014-08.world.server:target00
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 85899 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /iscsi_disks/disk01.img
Backing store flags:
Account information:
username
ACL information:
10.0.0.31
2. Client configuration
[1] Configure iSCSI Initiator.
[root@www ~]# yum -y install iscsi-initiator-utils
[root@www ~]# vi /etc/iscsi/iscsid.conf
# line 53: uncomment
node.session.auth.authmethod = CHAP
# line 57,58: uncomment and set the username and password that were set on the iSCSI target
node.session.auth.username = username
node.session.auth.password = password
# discover target
[root@www ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30
Starting iscsid: [ OK ]
10.0.0.30:3260,1 iqn.2014-08.world.server:target00
# confirm status after discovery
[root@www ~]# iscsiadm -m node -o show
# BEGIN RECORD 6.2.0-873.10.el6
node.name = iqn.2014-08.world.server:target00
node.tpgt = 1
node.startup = automatic
node.leading_login = No
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
# login to target
[root@www ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2014-08.world.server:target00, portal: 10.0.0.30,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.world.server:target00, portal: 10.0.0.30,3260] successful.
# confirm established session
[root@www ~]# iscsiadm -m session -o show
tcp: [1] 10.0.0.30:3260,1 iqn.2014-08.world.server:target00
# confirm partitions
[root@www ~]# cat /proc/partitions
major minor #blocks name
8 0 209715200 sda
8 1 512000 sda1
8 2 209202176 sda2
253 0 200966144 dm-0
253 1 8232960 dm-1
8 16 83886080 sdb
# added new device provided from target as [sdb]
[2] It is now possible to use the iSCSI device as follows.
[root@www ~]# yum -y install parted
# create a label
[root@www ~]# parted --script /dev/sdb mklabel msdos
# create a partition
[root@www ~]# parted --script /dev/sdb mkpart primary 0% 100%
# format with EXT4
[root@www ~]# mkfs.ext4 /dev/sdb1
# mount
[root@www ~]# mount /dev/sdb1 /mnt
[root@www ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vg_dlp-lv_root
ext4 189G 1.1G 179G 1% /
tmpfs tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 ext4 485M 75M 385M 17% /boot
/dev/sdb1 ext4 79G 184M 75G 1% /mnt
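The mount above will not survive a reboot. A minimal sketch of a persistent /etc/fstab entry, assuming the filesystem created above (the `_netdev` option defers the mount until the network, and hence the iSCSI session, is up; referencing the filesystem by UUID is safer than by /dev/sdb1, since iSCSI device names can change across boots):

```shell
# print the UUID of the new filesystem
blkid /dev/sdb1
# /etc/fstab entry (replace the UUID placeholder with the value printed above)
UUID=<uuid-of-sdb1>  /mnt  ext4  _netdev  0 0
```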
Additional notes:
After installing the client packages, start the daemon and then run discovery:
/etc/init.d/iscsid start
iscsiadm -m discovery -t st -p 12.123.0.51
iscsiadm -m discovery -T iqn.2015-06.com.oracle:zjxl -p 12.123.0.51:3260 -l
# edit the configuration file
vim /var/lib/iscsi/send_targets/12.123.0.51\,3260/iqn.2015-06.com.oracle\:oracle\:zjxl\,12.123.0.51\,3260\,1\,default/default
Configuring multipath:
Each machine should be attached to at least two storage IPs, so that if one fails the other can still connect. Edit /etc/multipath.conf:
blacklist {
    devnode "^sda"
}
defaults {
    user_friendly_names yes
    udev_dir /dev
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}
The blacklist section excludes the system disk from multipath.
Then run service multipathd restart; cat /proc/partitions will now show devices named dm-*, which are the multipath devices.
II. Ceph Setup
With Ceph itself, you can simply create a block image with rbd create, map it with rbd map, and then rbd showmapped shows the path the image is mapped to, e.g. /dev/rbd0. Then configure it in /etc/tgt/targets.conf on the iSCSI server:
<target iqn.2008-09.com.example:server.target11>
    direct-store /dev/rbd0
</target>
The iSCSI client can then use this block device.
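The steps just described can be sketched as follows (the pool and image names are assumptions for illustration, and the commands require a working Ceph cluster):

```shell
rbd create rbd/iscsi-img --size 10240   # create a 10 GiB image in the rbd pool
rbd map rbd/iscsi-img                   # map it via the kernel rbd module
rbd showmapped                          # prints the mapped path, e.g. /dev/rbd0
```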
However, this approach is fairly costly: it attaches the image through the ceph rbd kernel module, which causes heavy switching between kernel space and user space and inevitably hurts performance.
This problem has already been solved:
http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
http://ceph.com/dev-notes/updates-to-ceph-tgt-iscsi-support/
http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/
The iSCSI target on Ubuntu already supports RBD, and from Fedora 20 onward there are iSCSI RPM packages with RBD support, but on CentOS 6.5 iSCSI cannot be configured with RBD support out of the box.
If you have configured this feature as well, feedback and discussion are welcome.
III. Using scsi-target-utils with RBD Support
Exporting an iSCSI device from Ceph with the method above works, but since it relies on the rbd kernel module, the frequent kernel/user-space switching inevitably hurts performance. Can an RBD block device be exported directly as an iSCSI device? The answer is yes.
First, download the scsi-target-utils RPM package with RBD support from the following address:
# wget http://ceph.com/packages/ceph-extras/rpm/centos6/x86_64/scsi-target-utils-1.0.38-48.bf6981.ceph.el6.x86_64.rpm
# rpm -ivh scsi-target-utils-1.0.38-48.bf6981.ceph.el6.x86_64.rpm
After installation, check whether the current tgt supports the rbd driver:
# tgtadm --lld iscsi --mode system --op show
System:
State: ready
debug: off
LLDs:
iser: error
iscsi: ready
Backing stores:
rbd (bsoflags sync:direct)
rdwr (bsoflags sync:direct)
ssc
null
bsg
sg
sheepdog
Device types:
passthrough
tape
changer
controller
osd
cd/dvd
disk
iSNS:
iSNS=Off
iSNSServerIP=
iSNSServerPort=3205
iSNSAccessControl=Off
Create an RBD device:
# rbd create iscsi/tgt1 -s 10240
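This assumes an `iscsi` pool already exists; if not, it can be created first (a hedged sketch; the placement-group count of 64 is just an example):

```shell
ceph osd pool create iscsi 64    # create the pool with 64 placement groups
rbd create iscsi/tgt1 -s 10240   # then create the 10 GiB image in it
```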
Edit /etc/tgt/targets.conf to export the RBD device just created:
include /etc/tgt/conf.d/*.conf
<target iqn.2014-11.rbdstore.com:iscsi>
    driver iscsi
    bs-type rbd
    backing-store iscsi/tgt1
</target>
Restart tgt:
# /etc/init.d/tgtd restart
Stopping target framework daemon
Starting target framework daemon
Connect to this iSCSI target from the iSCSI initiator side:
[root@ceph-osd-1 ~]# iscsiadm -m discovery -t sendtargets -p 10.10.200.165
Starting iscsid: [ OK ]
10.10.200.165:3260,1 iqn.2014-11.rbdstore.com:iscsi
[root@ceph-osd-1 ~]# iscsiadm -m node -T iqn.2014-11.rbdstore.com:iscsi -l
Logging in to [iface: default, target: iqn.2014-11.rbdstore.com:iscsi, portal: 10.10.200.165,3260] (multiple)
Login to [iface: default, target: iqn.2014-11.rbdstore.com:iscsi, portal: 10.10.200.165,3260] successful.
[root@ceph-osd-1 ~]# fdisk -l
Disk /dev/sdb: 5788.2 GB, 5788206759936 bytes
255 heads, 63 sectors/track, 703709 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sda: 209.7 GB, 209715068928 bytes
255 heads, 63 sectors/track, 25496 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009a9dd
Device Boot Start End Blocks Id System
/dev/sda1 * 1 131 1048576 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 131 1176 8392704 82 Linux swap / Solaris
/dev/sda3 1176 25497 195356672 8e Linux LVM
Disk /dev/mapper/vg_swift-LogVol00: 200.0 GB, 200043134976 bytes
255 heads, 63 sectors/track, 24320 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/docker-253:0-3539142-pool: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 65536 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 4194304 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
Disk identifier: 0x00000000
As the output above shows, the export succeeded: /dev/sdc (10.7 GB) is the exported RBD device.
IV. NFS Setup
On one of the Ceph nodes, map a block device with rbd map, then format it and mount it at a directory such as /mnt. Install the NFS packages on that node:
yum -y install nfs-utils
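The server-side preparation described above might look like this (the image name and the /dev/rbd0 device path are assumptions):

```shell
rbd map rbd/nfs-img    # map an existing RBD image; prints e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0    # format the mapped device
mount /dev/rbd0 /mnt   # mount it at /mnt, which is then exported via NFS
```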
Configure the exported directory:
[root@mon0 mnt]# cat /etc/exports
/mnt 192.168.101.157(rw,async,no_subtree_check,no_root_squash)
/mnt 192.168.108.4(rw,async,no_subtree_check,no_root_squash)
Start the services and export:
service rpcbind start
chkconfig rpcbind on
service nfs start
chkconfig nfs on
exportfs -r
Check from the client:
[root@osd2 /]# showmount -e mon0
Export list for mon0:
/mnt 192.168.108.4,192.168.101.157
Then mount it:
mount -t nfs mon0:/mnt /mnt
Note that if the NFS mount ends up using UDP, an unstable network can cause problems; switching to TCP fixes this:
mount -t nfs mon0:/mnt /mnt -o proto=tcp -o nolock
V. rbd-fuse Setup
On the client machine, configure ceph.repo, install the rbd-fuse RPM package, and then a pool can be mounted:
rbd-fuse -p test /mnt
The example above mounts the test pool at /mnt on the client without cephx enabled. The block images in the test pool then appear as files under /mnt, and losetup can be used to attach such an image file.
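The losetup step might look like this (the image name `myimage` is an assumption):

```shell
losetup -f --show /mnt/myimage   # attach the image file to a free loop device, e.g. /dev/loop0
mount /dev/loop0 /media          # then mount the filesystem inside the image
```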
To unmount, simply run fusermount -u /mnt.
That wraps up this look at iSCSI, NFS, and Ceph; hopefully the above is of some help.