Installing and configuring the iSCSI Initiator software on RHEL 5
RHEL 5 includes iSCSI support in the kernel. Its iSCSI initiator software is the Open-iSCSI Initiator, which supports 10-Gigabit NICs, and its configuration differs significantly from the iSCSI initiator shipped with RHEL 4 and earlier Red Hat Linux releases.
I. Installing and configuring the iSCSI Initiator software
The following uses RHEL 5 x86 as an example to show how to install and configure the iSCSI initiator.
1. Install the iSCSI Initiator
Mount the first RHEL 5 x86 installation CD under the /mnt directory, then change to the /mnt/Server directory and install the package.
[root@pe03 Server]# cd /mnt/Server/
[root@pe03 Server]# rpm -ivh iscsi-initiator-utils-6.2.0.742-0.5.el5.i386.rpm
warning: iscsi-initiator-utils-6.2.0.742-0.5.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing... ########################################### [100%]
1:iscsi-initiator-utils ########################################### [100%]
This installs iscsid and iscsiadm into /sbin, and places the default configuration files in /etc/iscsi:
/etc/iscsi/iscsid.conf - All newly initiated iSCSI sessions use the parameter settings in this file by default; per-target parameters can be overridden with the iscsiadm command.
/etc/iscsi/initiatorname.iscsi - The initiator name configuration file for the software iSCSI initiator.
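As an illustration, a few commonly tuned entries in /etc/iscsi/iscsid.conf look like the following. The values shown are examples only, not recommendations; the parameter names are standard open-iscsi settings from the iscsid.conf shipped with this package:

```conf
# Start sessions to known nodes automatically when the iscsi service starts
node.startup = automatic
# Seconds to wait for a broken session to re-establish before failing I/O
node.session.timeo.replacement_timeout = 120
# CHAP authentication - uncomment only if the target requires it
#node.session.auth.authmethod = CHAP
#node.session.auth.username = someuser
#node.session.auth.password = somepassword
```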
2. Make sure the iscsi services start at boot
Use chkconfig to verify that the iscsi and iscsid services start automatically in runlevels 3 and 5:
[root@pe03 Server]# chkconfig --list |grep iscsi
iscsi 0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
If iscsid and iscsi are not set to start at boot, use chkconfig to enable both services in runlevels 3 and 5:
[root@pe03 Server]# chkconfig iscsi --level 35 on
[root@pe03 Server]# chkconfig iscsid --level 35 on
3. Set the InitiatorName
Edit the /etc/iscsi/initiatorname.iscsi file with vi; the file contents are as follows:
InitiatorName=iqn.2005-.max:pe03
In this example the InitiatorName is set to iqn.2005-.max:pe03.
Notes:
- The word InitiatorName is case sensitive and must start at the beginning of the line; the value after the equals sign is the initiator name to set, which should follow the iqn naming convention.
- The iqn convention defines the InitiatorName format as iqn.domaindate.reverse.domain.name:optional name, for example iqn.2006-.h3c:dbserver.
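To make the iqn format concrete, the fields of a hypothetical well-formed name (not one of the names used in this example) break down as follows:

```
iqn.2001-04.com.example:storage.disk1
 |     |        |            |
 |     |        |            +-- optional name chosen by the naming authority
 |     |        +-- reversed domain name of the naming authority
 |     +-- year-month in which the domain was registered
 +-- literal prefix "iqn"
```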
4. Start the iscsi service
Start the iSCSI service with service iscsi start:
[root@pe03 Server]# service iscsi start
Turning off network shutdown. Starting iSCSI daemon: [ OK ]
[ OK ]
Setting up iSCSI targets: [ OK ]
Check the status of the iSCSI-related services with service iscsi status and service iscsid status:
[root@pe03 Server]# service iscsi status
iscsid (pid 3697 3696) is running...
[root@pe03 Server]# service iscsid status
iscsid (pid 3697 3696) is running...
5. Assign storage resources and discover targets on Linux
The iSCSI initiator version currently shipped with RHEL 5 supports only the sendtargets discovery method; SLP and iSNS are not supported.
Assuming the back-end storage is an IX5000 whose iSCSI service IP address is 200.200.10.200, run the following command to discover targets:
[root@pe03 Server]# iscsiadm -m discovery -t sendtargets -p 200.200.10.200:3260
Because the initiator has not yet been created on the IX5000 and no volume has been assigned to it, this command succeeds without producing any output; however, a new initiator record can now be seen on the IX5000.
After assigning a volume to the Linux server on the IX5000, run the discovery again:
[root@pe03 Server]# iscsiadm -m discovery -t sendtargets -p 200.200.10.200:3260
iscsiadm: unexpected SendTargets data:
200.200.10.200:3260,1 iqn.2007-:h3c:200realm.rhel5
A new target is discovered, named iqn.2007-:h3c:200realm.rhel5.
Note: for the specific steps to assign storage space to a Red Hat Linux server on an IP SAN device, refer to the documentation for that storage device.
6. Log in to the target
[root@pe03 Server]# iscsiadm -m node -T iqn.2007-:h3c:200realm.rhel5 -p 200.200.10.200:3260 -l
Here the -T option takes the target name; the final option is a lowercase letter l (the first letter of the word "login"), not the digit 1.
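For completeness, the matching logout uses -u in place of -l; this sketch assumes the same target and portal as the login above:

```
[root@pe03 Server]# iscsiadm -m node -T iqn.2007-:h3c:200realm.rhel5 -p 200.200.10.200:3260 -u
```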
Use iscsiadm -m session -i to view iSCSI session and device information:
[root@pe03 ~]# iscsiadm -m session -i
iscsiadm version 2.0-742
************************************
Session (sid 0) using module tcp:
************************************
TargetName: iqn.2007-:h3c:200realm.rhel5
Portal Group Tag: 1
Network Portal: 200.200.10.200:3260
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 65536
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: No
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
Open-iSCSI keeps its persistent configuration in the following iSCSI database files:
Discovery (/var/lib/iscsi/send_targets): a file named after the target server's "IP,port" pair (for example "200.200.10.200,3260") is created under /var/lib/iscsi/send_targets; it records information about that target server.
Node (/var/lib/iscsi/nodes): one or more directories named after the targets on the target server are created under /var/lib/iscsi/nodes; each directory contains a file that records the information for that particular target.
iscsiadm is a command-line tool for managing (updating, deleting, inserting, querying) these iSCSI configuration database files; with it, users can perform a range of operations on iSCSI nodes, sessions, connections, and discovery records.
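For example, a single parameter in a node record can be overridden with the --op update form (shown here for the target discovered above; node.startup is one of the standard open-iscsi record settings):

```
[root@pe03 Server]# iscsiadm -m node -T iqn.2007-:h3c:200realm.rhel5 -p 200.200.10.200:3260 \
    --op update --name node.startup --value automatic
```

The change is written into the corresponding file under /var/lib/iscsi/nodes and takes effect the next time a session to that target is established.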
II. Partition the newly discovered disk and create a file system
1. First use fdisk -l to find the new disk; here a 100-GB disk appears with the device name /dev/sdb:
[root@pe03 Server]# fdisk -l
..............................
Disk /dev/sdb: 107.3 GB, 107373133824 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
2. Partition the disk with fdisk /dev/sdb; in this example the whole disk is made into a single primary partition, /dev/sdb1:
[root@pe03 Server]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 13054.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-13054, default 1): <-- press Enter here to accept the default value 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-13054, default 13054): <-- press Enter here to accept the default value 13054
Using default value 13054
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
3. Create an ext3 file system on /dev/sdb1 with the mkfs command:
[root@pe03 Server]# mkfs -t ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
13107200 inodes, 26214055 blocks
1310702 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
800 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
4. Use tune2fs to change the file system attributes and disable the automatic checks:
[root@pe03 Server]# tune2fs -c -1 -i 0 /dev/sdb1
tune2fs 1.39 (29-May-2006)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
The ext file systems on Linux have a feature whereby, after a partition has been mounted and unmounted a certain number of times, or after a fixed interval has elapsed, the system forces a check of that partition. The check makes the disk respond very slowly and disrupts service, so the purpose of this step is to turn off the automatic file system checks.
III. Set up automatic mounting of the file system
In this example /dev/sdb1 will be mounted on the /data directory.
1. Manually create the /data directory:
[root@pe03 Server]# mkdir /data
2. Use tune2fs to look up the file system's UUID:
[root@pe03 Server]# tune2fs -l /dev/sdb1
tune2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 3f0a00b7-4939-4ad2-a592-0821bb79f7c6
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal resize_inode dir_index filetype sparse_super large_file
....................
3. Edit the /etc/fstab file with vi to set up automatic mounting:
Add the last line shown below to /etc/fstab:
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
UUID=3f0a00b7-4939-4ad2-a592-0821bb79f7c6 /data ext3 _netdev 0 0
Notes:
- The mount option used is "_netdev".
- The UUID entry must start at the beginning of the line.
- After a Linux system reboots, disk device names may change, which can prevent the file system from mounting at all or cause it to mount incorrectly. Mounting by UUID avoids this problem; assigning the file system a volume label works as well. For the specific steps, see KMS-12541, "Using file system labels on Linux to fix automatic mounts broken by changing disk names".
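The notes above can be condensed into a small sketch that builds the fstab entry from the UUID. The UUID here is hard-coded to the value from the tune2fs output above; on a live system it would come from blkid -s UUID -o value /dev/sdb1, a standard blkid invocation:

```shell
#!/bin/sh
# UUID taken from the tune2fs -l output above; on a real system:
#   UUID=$(blkid -s UUID -o value /dev/sdb1)
UUID="3f0a00b7-4939-4ad2-a592-0821bb79f7c6"
# _netdev delays the mount until networking (and therefore iSCSI) is up
echo "UUID=${UUID} /data ext3 _netdev 0 0"
```

Appending the printed line to /etc/fstab is equivalent to the manual edit shown above.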
4. Mount the file system with mount -a:
[root@pe03 Server]# mount -a
5. Use df to confirm that the file system has been mounted:
[root@pe03 /]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
ext3 65G 2.4G 59G 4% /
/dev/sda1 ext3 99M 12M 82M 13% /boot
tmpfs tmpfs 471M 0 471M 0% /dev/shm
/dev/sdb1 ext3 99G 188M 94G 1% /data
6. If circumstances allow, reboot the operating system and use the df command to confirm that the file system is mounted automatically after the reboot.
IV. FAQ
1. If a new volume is attached under a Target on the storage, how does the server connect to and recognize it?
If a new volume has been added under an existing Target, run iscsiadm -m session -R on the server to rescan the currently connected sessions and expose the new volume:
[root@as5 as5-target]# iscsiadm -m session -R
The corresponding target records live under /var/lib/iscsi/nodes; use lsscsi or fdisk -l to view the attached volumes. In the example below, sdb is the newly attached SAN volume:
[root@as5 as5-target]# lsscsi
[1:0:0:0] disk H3C H3C ISCSI DISK v1.0 /dev/sda
[1:0:0:1] disk H3C H3C ISCSI DISK v1.0 /dev/sdb
Appendix 1:
Open-iSCSI modules
Appendix 2:
Open-iSCSI README file
1. In This Release
==================
This file describes the Linux* Open-iSCSI Initiator.
1.1. Features
- highly optimized and very small-footprint data path;
- persistent configuration database;
- SendTargets discovery;
- CHAP;
- PDU header Digest;
- multiple sessions;
For the most recent list of features please refer to:
http://www.open-iscsi.org
2. Introduction
===============
Open-iSCSI project is a high-performance, transport independent,
multi-platform implementation of RFC3720 iSCSI.
Open-iSCSI is partitioned into user and kernel parts.
The kernel portion of Open-iSCSI is a from-scratch code licensed under GPL. The kernel part implements iSCSI data path (that is, iSCSI Read and iSCSI Write), and consists of three loadable modules: scsi_transport_iscsi.ko, libiscsi.ko and iscsi_tcp.ko.
User space contains the entire control plane: configuration manager, iSCSI Discovery, Login and Logout processing,connection-level error processing, Nop-In and Nop-Out handling, and (in the future:) Text processing, iSNS, SLP, Radius, etc.
The user space Open-iSCSI consists of a daemon process called iscsid, and a management utility iscsiadm.
3. Installation
===============
To install the iSCSI tools run:
rpm -ivh iscsi-initiator-utils-<your version>.rpm
This will install iscsid and iscsiadm to /sbin. It will also install
default config files to /etc/iscsi:
/etc/iscsi/iscsid.conf - All new sessions will inherit settings from
this file when they are first discovered. To override a value for a specific
target see the iscsiadm op command below and in the iscsiadm man page.
See Configuration section below.
/etc/iscsi/initiatorname.iscsi - The default initiator name for software
iSCSI initiator.
4. Open-iSCSI daemon
====================
The daemon implements control path of iSCSI protocol, plus some management facilities. For example, the daemon could be configured to automatically re-start discovery at startup, based on the contents of persistent iSCSI database (see next section).
For help, run:
./iscsid --help
Usage: iscsid [OPTION]
-c, --config=[path] Execute in the config file (/etc/iscsi/iscsid.conf).
-f, --foreground run iscsid in the foreground
-d, --debug debuglevel print debugging information
-u, --uid=uid run as uid, default is current user
-g, --gid=gid run as gid, default is current user group
-h, --help display this help and exit
-v, --version display version and exit
5. Open-iSCSI Configuration Utility
===================================
Open-iSCSI persistent configuration is implemented as a set of iSCSI database files.
- Discovery (/var/lib/iscsi/send_targets);
- Node (/var/lib/iscsi/nodes).
The iscsiadm utility is a command-line tool to manage (update, delete, insert, query) the persistent database.
The utility presents a set of operations that a user can perform on iSCSI nodes, sessions, connections, and discovery records.
Note that some of the iSCSI Node and iSCSI Discovery operations do not require iSCSI daemon (iscsid) loaded.
For help, run:
./iscsiadm --help
Usage: iscsiadm [OPTION]
-m, --mode <op> specify operational mode op = <discovery|node>
-m discovery --type=[type] --portal=[ip:port] --login
perform [type] discovery for target portal with
ip-address [ip] and port [port]. Initiate Login for
each discovered target if --login is specified
-m discovery display all discovery records from internal
persistent discovery database
-m discovery --portal=[ip:port] --login
perform discovery based on portal in database
-m discovery --portal=[ip:port] --op=[op] [--name=[name] --value=[value]]
perform specific DB operation [op] for specific
discovery portal. It could be one of:
[new], [delete], [update] or [show]. In case of
[update], you have to provide [name] and [value]
you wish to update
-m node display all discovered nodes from internal
persistent discovery database
-m node --targetname=[name] --portal=[ip:port] [--login|--logout]
-m node --targetname=[name] --portal=[ip:port] --op=[op] [--name=[name] \
--value=[value]]
perform specific DB operation [op] for specific
portal on target. It could be one of:
[new], [delete], [update] or [show]. In case of
[update], you have to provide [name] and [value]
you wish to update
-m node --logoutall=[all,manual,automatic]
Logout "all" the running sessions or just the ones
with a node or conn startup value manual or automatic.
Nodes marked as ONBOOT are skipped.
-m node --loginall=[all,manual,automatic]
Login "all" the running sessions or just the ones
with a node or conn startup value manual or automatic.
Nodes marked as ONBOOT are skipped.
-m session display all active sessions and connections
-m session --sid=[sid] [--info | --rescan | --logout ]
--op=[op] [--name=[name