LIO Rocks?
YES!
Would you like to serve SRP, iSCSI, or FC devices from your local disks?
Then simply install Ubuntu 12.10 (12.04 may work too), then:
apt-get upgrade
apt-get install targetcli
Dive in:
targetcli
>ls
Welcome to the targetcli shell:
Copyright (c) 2011 by RisingTide Systems LLC.
Visit us at http://www.risingtidesystems.com.
Using ib_srpt fabric module.
Using loopback fabric module.
Using qla2xxx fabric module.
Using iscsi fabric module.
Using tcm_fc fabric module.
/> list
Command not found list
/> ls
o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- fileio ............................................... [0 Storage Object]
| o- iblock ............................................... [1 Storage Object]
| | o- araid63fc .......................... [/dev/vg_araif063/lvol0 activated]
| o- pscsi ................................................ [0 Storage Object]
| o- rd_dr ................................................ [0 Storage Object]
| o- rd_mcp ............................................... [0 Storage Object]
o- ib_srpt ........................................................ [0 Target]
o- iscsi .......................................................... [0 Target]
o- loopback ....................................................... [0 Target]
o- qla2xxx ........................................................ [1 Target]
| o- 21:00:00:24:ff:04:6e:84 ....................................... [enabled]
| o- acls .......................................................... [1 ACL]
| | o- 21:00:00:24:ff:04:6e:fa .............................. [1 Mapped LUN]
| | o- mapped_lun0 ........................................... [lun0 (rw)]
| o- luns .......................................................... [1 LUN]
| o- lun0 .................... [iblock/araid63fc (/dev/vg_araif063/lvol0)]
o- tcm_fc ......................................................... [0 Target]
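For reference, a target like the qla2xxx one above can be created inside the targetcli shell roughly as follows. This is only a sketch: the backstore name, LV path, and WWNs are taken from the listing above, and the exact command syntax may differ between targetcli versions:

```
/> /backstores/iblock create araid63fc /dev/vg_araif063/lvol0
/> /qla2xxx create 21:00:00:24:ff:04:6e:84
/> cd /qla2xxx/21:00:00:24:ff:04:6e:84
/qla2xxx/21:00...> luns/ create /backstores/iblock/araid63fc
/qla2xxx/21:00...> acls/ create 21:00:00:24:ff:04:6e:fa
/> saveconfig
```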
Wednesday, 27 March 2013
Wednesday, 23 January 2013
Upgrade your running Linux to SL 6.3 headless and medialess
Install a new Linux without going into the cluster room :)
Without PXE, CD/DVD, or USB.
On the host running the old Linux:
cd /boot
curl http://ftp1.scientificlinux.org/linux/scientific/6.3/x86_64/os/images/pxeboot/vmlinuz -o vmlinuz-sl63
curl http://ftp1.scientificlinux.org/linux/scientific/6.3/x86_64/os/images/pxeboot/initrd.img -o initrd-sl63.img
Update GRUB by adding the following entry:
title Remote SL63 OS Install
root (hd0,0)
kernel /vmlinuz-sl63 vnc vncconnect=n.n.n.n headless ip=n.n.n.n
netmask=n.n.n.n gateway=n.n.n.n dns=n.n.n.n hostname=x ksdevice=eth0
method=http://ftp1.scientificlinux.org/linux/scientific/6.3/x86_64/os lang=en_US keymap=us
initrd /initrd-sl63.img
If your remote machine uses DHCP instead of a static IP address, the recipe is slightly different: leave off the netmask, gateway, and dns parameters and replace "ip=n.n.n.n" with "ip=dhcp".
The full list of parameters:
vnc
vncconnect={IP address of machine where you will run vncviewer in listen mode}
headless
ip={IP address for the remote machine}
netmask={netmask for the remote machine}
gateway={gateway IP address for the remote machine}
dns={IP address for the DNS server}
hostname={desired FQDN}
ksdevice={name of network device}
method={URL to parent directory of images/state2.img}
lang={proper language code}
keymap={proper country code}
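The whole stanza can also be generated by a small helper script (a sketch; `make_grub_stanza` and the example addresses are my own, and the mirror URL is the one used above):

```shell
#!/bin/sh
# Emit a GRUB (legacy) stanza for a remote, headless SL 6.3 install,
# using the installer parameters listed above.
make_grub_stanza() {
    ip="$1"; netmask="$2"; gateway="$3"; dns="$4"; host="$5"; vnc="$6"
    mirror="http://ftp1.scientificlinux.org/linux/scientific/6.3/x86_64/os"
    cat <<EOF
title Remote SL63 OS Install
root (hd0,0)
kernel /vmlinuz-sl63 vnc vncconnect=$vnc headless ip=$ip netmask=$netmask gateway=$gateway dns=$dns hostname=$host ksdevice=eth0 method=$mirror lang=en_US keymap=us
initrd /initrd-sl63.img
EOF
}

# Example with documentation-range addresses:
make_grub_stanza 192.0.2.10 255.255.255.0 192.0.2.1 192.0.2.53 node1.example.com 192.0.2.99
```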
Installing SRPT on Scientific Linux 6.3/6.x PART-3
Installing SCST from source:
If you do not care about RPMs, you can install SCST directly into the system; the drawback is that it is really hard to uninstall. To get rid of the SCST drivers you would have to reinstall the kernel and its kernel-devel/src packages as well.
adduser mockbuild
visudo
Add the line
mockbuild ALL=(ALL) NOPASSWD: ALL
su - mockbuild
mkdir ~/scst
Get the most recent version of SRPT from http://sourceforge.net/projects/scst/files/srpt/
and extract it to:
/home/mockbuild/scst/srpt-2.2.0/
Patch the kernel makefile:
#!/bin/bash
if [ -e /lib/modules/$(uname -r)/build/scripts/Makefile.lib ]; then cd /lib/modules/$(uname -r)/build; else cd /usr/src/linux-$(uname -r); fi
sudo patch -p1 < /home/mockbuild/scst/srpt-2.2.0/patches/kernel-$(uname -r | sed -e 's|-.*||')-pre-cflags.patch
make scst_clean scst scst_install iscsi iscsi_install
make srpt srpt_install
make -s scstadm scstadm_install
Done!
Follow PART-2 to configure.
Installing SRPT on Scientific Linux 6.3/6.x PART-1
All these instructions assume that you have machines with IB cards connected to an IB switch.
My tests were done with Mellanox DDR (ConnectX-2, 20 Gb/s) and QDR (ConnectX-3, 40 Gb/s) cards.
Assume machineA has a block device (RAID6) at /dev/vg00/lv_01 and you want to export it to machineB as a block/SCSI device.
Why do we need it?
Situation 1: We bought a bunch of RAID boxes, each with 24 HDDs and very fast local I/O. Can we use their full power for an MS SQL or MySQL database? Can we combine them into one striped LVM on the DB server?
Situation 2: We have a server with a huge RAID in LVM and we are running out of space.
Can we expand the LVM with another RAID without plugging in yet another physical array?
Can we expand the LVM without any server downtime? Yes, we can, if we bring the new, empty RAID to the server over InfiniBand.
There are several solutions for this: iSCSI (TCP/IP, slow), iSER (RDMA, fast), and SRP (RDMA, fastest, with almost no overhead).
Yes, we can export a block device with the iSCSI protocol. One can export a block device from a Linux machine to a Windows machine over TCP/IP, or better, use IP-over-IB for around 300 MB/s. Under concurrent, heavy load iSCSI uses a lot of CPU, and in some cases performance drops to only 10% of the original RAID. Due to the overhead of the iSCSI protocol I never got more than 350 MB/s.
There is a more advanced protocol, iSER, which is iSCSI over RDMA. It brings more performance; from my personal tests it reaches about 30-40% of the original RAID. Unfortunately, under heavy load kernel panics occur for unknown reasons. iSER performance was about 400-420 MB/s.
Reading the Mellanox white paper "Building a Scalable Storage with InfiniBand" motivated me to look into the SRP protocol.
Let us install SRPT and mount it on another machine.
Assume machineA is the target and machineB is the initiator.
In plain language: machineA exports the block device and machineB is the client.
There are several steps to get the whole machinery running.
First of all, on a clean SL 6.3 installation run: yum -y update && yum -y upgrade
The next step depends on your IB card: if the card works with RDMA out of the box, you don't need OFED at all.
Some of my machines have QDR cards (Mellanox Technologies MT27500 Family [ConnectX-3]); they are fine with kernels 2.6.32-279.x.
But another card, a dual-port DDR, does not work without OFED.
Getting RDMA running without and with OFED:
1) Without OFED.
To verify your IB installation you can test the RDMA connection; the utilities come with librdmacm-utils (which also contains examples of how to use the CM library).
machineA:> rdma_server
then on machineB:
machineB> rdma_client -s serverA.ibIP
The output should not contain negative results:
rdma_client: start
rdma_server: end 0
rdma_client: end 0
This looks nice.
2) With OFED we need to be careful: it brings a broken SRPT, and we would not be able to compile SCST properly.
2.1) In the OFED folder: ./configure -c myofed.conf
You can generate myofed.conf like this:
./install.pl -p
This generates ofed-all.conf; rename it to myofed.conf.
Edit that file, set the options below, and remove everything else you don't need (mpi, sdp, etc.):
nfsrdma=n
rnfs-utils=n
srp=n
srpt=n
iser=n
I disabled NFS here as well, so we will not run the buggy nfs-rdma on the SRPT server. iSER brings only ~30% of the original RAID performance, so we say no to the iSER/iSCSI protocols.
To use SCST we need to compile it from source, so let me explain the build scripts.
WARNING: we are going to compile the target part; after this point any kernel update will break the SRPT installation.
On the machine where you are going to compile the code:
adduser mockbuild
visudo
Add the line
mockbuild ALL=(ALL) NOPASSWD: ALL
We need to set up our rpmbuild environment:
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild
%_smp_mflags -j3
%_signature gpg
%_gpg_name Your Name (Your Title)' > ~/.rpmmacros
Time to get some other directories prepared:
mkdir ~/scst
mkdir ~/mlx
cd ~/rpmbuild/SOURCES
wget http://downloads.sourceforge.net/project/scst/srpt/2.2.0/srpt-2.2.0.tar.bz2
wget http://downloads.sourceforge.net/project/scst/scst/2.2.0/scst-2.2.0.tar.bz2
wget http://downloads.sourceforge.net/project/scst/scstadmin/2.2.0/scstadmin-2.2.0.tar.bz2
wget http://downloads.sourceforge.net/project/scst/iscsi-scst/2.2.0/iscsi-scst-2.2.0.tar.bz2
cp ./srpt-2.2.0.tar.bz2 ~/scst
wget -O ~/rpmbuild/SPECS/scst.spec.newlines http://pastebin.com/raw.php?i=HLMskJKK
cd ~/rpmbuild/SPECS
tr -d '\r' < scst.spec.newlines > scst.spec
rm scst.spec.newlines
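The tr -d '\r' step converts the DOS line endings of the pastebin download into Unix ones; a minimal illustration of what it does (the sample spec content is made up):

```shell
#!/bin/sh
# Strip carriage returns, the same cleanup applied to scst.spec above.
printf 'Name: scst\r\nVersion: 2.2.0\r\n' > /tmp/spec.crlf
tr -d '\r' < /tmp/spec.crlf > /tmp/spec.clean
# Verify: the cleaned file must contain no CR bytes.
if grep -q "$(printf '\r')" /tmp/spec.clean; then
    echo "still has CR"
else
    echo "clean"
fi
```

Running it prints `clean`, confirming the spec file no longer contains CR bytes.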
cd ~/scst
tar jxvf srpt-2.2.0.tar.bz2
#Patch the kernel, you can put this into shell script then execute later
if [ -e /lib/modules/$(uname -r)/build/scripts/Makefile.lib ]; then cd /lib/modules/$(uname -r)/build; else cd /usr/src/linux-$(uname -r); fi
sudo patch -p1 < /home/mockbuild/scst/srpt-2.2.0/patches/kernel-$(uname -r | sed -e 's|-.*||')-pre-cflags.patch
#Compile and build RPMS
cd ~/rpmbuild/SPECS
rpmbuild -ba scst.spec
If you see errors during compilation, follow the advice in the message, which will tell you to remove IB modules from the kernel path.
WARNING: before installation, remove /lib/modules/$(uname -r)/updates/drivers/infiniband/ulp/srpt if it exists!
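A leftover module like that can be located with a short check (a sketch; `MODROOT` is a hypothetical variable so the same search can be pointed at any module tree):

```shell
#!/bin/sh
# Find stale ib_srpt modules that could shadow the SCST-built one.
# On a real target leave MODROOT unset to scan the running kernel's tree.
MODROOT="${MODROOT:-/lib/modules/$(uname -r)}"
find "$MODROOT" -name 'ib_srpt.ko*' 2>/dev/null
```

Any path it prints is a candidate for deletion before installing the SCST RPMs.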
Almost done.
Let us install rpms on the target machine:
Copy the RPMs from ~mockbuild/rpmbuild/RPMS/... and install them:
rpm -Uhv ~/kernel-module-scst-core-2.6.32-???.el6-2.2.0-1.ab.x86_64.rpm
rpm -Uhv ~/kernel-module-scst-srpt-2.6.32-???.el6-2.2.0-1.ab.x86_64.rpm
rpm -Uhv ~/scst-tools-2.6.32-220.4.2.el6-???-1.ab.x86_64.rpm
# regenerate dependency of modules
depmod -aq $(uname -r)
Done!
In the next post I will show how to configure SCST.
Thanks to Uwe Sauter from HLRS for prompting me to publish this.
Credit goes to http://sonicostech.wordpress.com/2012/02/23/howto-infiniband-srp-target-on-centos-6-incl-rpm-spec/ for the detailed explanation, which worked for me.
Installing SRPT on Scientific Linux 6.3/6.x PART-2
Configuring SCST.
For installing SCST please see the previous post.
Configuration:
Assume machineA has a block device (RAID6) at /dev/vg00/lv_01 and you want to export it to machineB as a block/SCSI device.
1) vim /etc/modprobe.d/ib_srpt.conf
options ib_srpt use_node_guid_in_target_name=1
2) Add the module to the auto-load list in /etc/init.d/scst (around line 101):
SCST_MODULES="scst ib_srpt"
/etc/init.d/scst restart
WARNING: if you get an error about unknown symbols when loading the kernel module, the kernel is loading the wrong ib_srpt module; find it and delete it.
3) Let us export something.
Now let's find out what targets are available to scstadmin:
scstadmin --list_target
you will get something like:
Driver Target
---------------------------
ib_srpt 0002:c903:0018:4c40
Each IB card has a unique GUID.
For management, install srptools if you need it:
yum install srptools
Assume machineA has /dev/vg00/lv_01 and you want to export it to machineB.
machineA is the target and machineB is the initiator; in plain language, machineA exports the block device and machineB is the client.
1) On machineA, edit /etc/scst.conf:
############
HANDLER vdisk_blockio {
DEVICE disk01 {
filename /dev/vg00/lv_01
threads_num 1
nv_cache 1
write_through 1
}
}
TARGET_DRIVER ib_srpt {
TARGET 0002:c903:0018:4c40 {
LUN 0 disk01
enabled 1
}
}
############
Warning: do not forget to restart SCST:
/etc/init.d/scst restart
Replace the TARGET value with your own IB GUID (the result of scstadmin --list_target on machineA).
2) On machineB:
modprobe ib_srp
If there are no errors, you should get an entry under:
/sys/class/infiniband_srp/ (e.g. srp-mlx4_0-1, with an add_target attribute)
ibsrpdm -c prints the list of exported targets visible on the IB network.
On my machine I get something like this:
id_ext=0002c90300184c40,ioc_guid=0002c90300184c40,dgid=fe800000000000000002c90300184c41,pkey=ffff,service_id=0002c90300184c40
To attach it, write that line to the add_target attribute:
echo "id_ext=0002c90300184c40,ioc_guid=0002c90300184c40,dgid=fe800000000000000002c90300184c41,pkey=ffff,service_id=0002c90300184c40" >>/sys/class/infiniband_srp/srp-mlx4_0-1/add_target
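Note that scstadmin prints the GUID with colons (0002:c903:0018:4c40) while the add_target string needs the bare form; a one-liner converts it (the variable names are my own):

```shell
#!/bin/sh
# Turn the colon-separated GUID from scstadmin --list_target into the
# bare hex form used in the id_ext=/ioc_guid= fields of add_target.
guid_colon="0002:c903:0018:4c40"
guid_bare=$(printf '%s' "$guid_colon" | tr -d ':')
echo "id_ext=$guid_bare,ioc_guid=$guid_bare"
# prints: id_ext=0002c90300184c40,ioc_guid=0002c90300184c40
```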
Then lsscsi will show the new SCSI device:
[7:0:0:0] disk SCST_BIO disk01 300 /dev/sdc
So that's it.
If you need more automated mounting of targets, you can combine multipathd with srp_daemon.
But that is another story.
Friday, 11 January 2013
Finally Latex Editor and Compiler supports OwnCloud >4.5.5
just released:
Saturday, 5 January 2013
[ERROR] Program launch failed. Unable to locate Java VM. Please set JAVA_HOME environment variable.
TiStudio win7 64bit Trouble!!!
I got some errors on building an Android app:
To solve it, just install the 32-bit Java! In cmd, check that java -version prints something like the output below; otherwise you get: [ERROR] Program launch failed. Unable to locate Java VM. Please set JAVA_HOME environment variable.
Another problem: TiStudio does not like software installed in folders with spaces in their names.
C:\>java -version
java version "1.7.0_10"
Java(TM) SE Runtime Environment (build 1.7.0_10-b18)
Java HotSpot(TM) Client VM (build 23.6-b04, mixed mode, sharing)
I moved everything to C:\ANDROID\java\i586 and C:\ANDROID\android-sdk.
Finally it works without trouble.