OK, so LUN could be "/vol/iscsivol/tesztlun0" but is never exposed to the client?
Ya pretty much like that.
As you know iSCSI is just SCSI commands encapsulated in TCP packets.
The 'server' portion of iSCSI is called the 'iSCSI target' and the 'client' is called the 'iSCSI initiator'. Originally the idea was that you'd just slap some SCSI drives into a network adapter box and then computers could access the drives over the network with their own hardware adapters.
But you can get software targets and initiators, too. So I used the software 'iSCSI Enterprise Target' for the server portion and then the built-in iSCSI initiator in the Linux kernel.
And Linux being Linux, any block device can be used as a drive: a disk partition, a USB flash drive, a file-backed loop device, a logical volume, etc. It's all the same, more or less.
Using the iSCSI protocol I can then just export any block device I feel like over the network.
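With the iSCSI Enterprise Target, an export really is just a couple of lines of config. A sketch, with a made-up IQN and a made-up backing file (the real path would be whatever device or file you want to export):

```shell
# Create a file to back the LUN (any block device would work just as well).
dd if=/dev/zero of=/tmp/tesztlun0.img bs=1M count=100

# Then point ietd at it -- this would normally go in /etc/ietd.conf (needs root):
#
#   Target iqn.2009-01.com.example:storage.tesztlun0
#       Lun 0 Path=/tmp/tesztlun0.img,Type=fileio
#
# Type=fileio is for file-backed storage; Type=blockio hands over a raw block device.
```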
So yeah, then it shows up as /dev/whatever. It's been a long, long time since I did this (years and years), so I don't remember all the details, and I'm sure that now with udev and such it's changed.
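On the client side, with the open-iscsi tools, the dance looks roughly like this (the IP and IQN are made up, and all of it needs root):

```shell
# Discover the targets a host is exporting:
#   iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# Log in to one of the discovered targets:
#   iscsiadm -m node -T iqn.2009-01.com.example:storage.tesztlun0 -p 192.168.1.10 --login
# After login the kernel registers a new SCSI disk; it appears as the next free
# /dev/sd* node (check dmesg), and udev typically adds /dev/disk/by-path/ links too.
```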
So for Xen it would be the same as setting up a hard-drive partition for it to use. All the differences and network details are abstracted away from anything to do with the VM.
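So a domU config line for it looks like any other physical disk; the device names here are just examples:

```
# In the domU config file: say the iSCSI disk the host logged into showed up as /dev/sdb
disk = [ 'phy:/dev/sdb,xvda,w' ]
```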
-------------------------------
If you go and use virt-manager you'd see that when setting up hardware for a VM it does have iSCSI support. I don't know if it passes the LUN to the VM directly as a SCSI device or if it mounts it as a block device in the host system to be used as a generic drive. I haven't tried that out yet.
------------------------------
There are other network block protocols that Linux supports, like:
* ATA over Ethernet (like iSCSI, but instead of SCSI commands in TCP, it puts ATA commands in Ethernet frames; it should have less overhead, but iSCSI is more mature, has better support, and usually ends up being faster)
* NBD -- network block device
* GNBD -- GFS network block devices; this is supplied as part of Red Hat's GFSv1/v2 cluster-aware file system.
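NBD is the simplest of the bunch to try out. A rough sketch (hostnames, ports, and devices are examples, and the client side needs root):

```shell
# Server: export a file over TCP (older nbd-server syntax: port, then the file):
#   nbd-server 10809 /tmp/export.img
# Client: attach it as a local block device:
#   modprobe nbd
#   nbd-client server.example.com 10809 /dev/nbd0
# /dev/nbd0 then behaves like any other block device: partition it, mkfs it, mount it.
```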
iSCSI is nice because it's standard and lots of different devices and OSes support it.
----------------------------
Just keep in mind that if you want to use iSCSI for real systems (not VMs) booting from the network, you'll still need a local hard drive for swap. There are nasty deadlocks and race conditions associated with booting from the network and running out of RAM (you need memory to read data from the network, but you need data off the network for storage, but you need the storage for swap because you're out of RAM, etc.). So having a local drive for swap solves that.
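Setting up that local swap drive is the usual routine (the device name is just an example, and it all needs root):

```shell
# One-time setup on the local disk:
#   mkswap /dev/sda2
#   swapon /dev/sda2
# And an /etc/fstab entry so it comes back after a reboot:
#   /dev/sda2  none  swap  sw  0  0
```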
And if you want multiple OSes to access the same file system on the same block device at the same time, you'll need a cluster-aware file system like OCFSv2 or GFS. (OCFSv2 is in the Linux kernel right now; GFS is part of Red Hat's clustering package, along with GNBD and CLVM, the cluster logical volume manager.) That way they coordinate file locking and such so they don't accidentally corrupt the file system by stepping on each other's toes.
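With OCFSv2, for instance, you format the shared device once with enough node slots and then mount it from every machine (this assumes the o2cb cluster service is already configured; device and mount point are examples):

```shell
#   mkfs.ocfs2 -N 4 /dev/sdb1             # -N = how many nodes may mount it at once
#   mount -t ocfs2 /dev/sdb1 /mnt/shared  # run the mount on each node
```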
----------------------------
If you want to play around with iSCSI or other things like that, the best and easiest way may be to use OpenFiler.
It's http://www.openfiler.com.
Very clever, very nice to use.
Just grab an old PC, slap in a few 1TB drives, and use OpenFiler to configure them into RAID 10, and you'd have a very kick-ass network storage box for holding dozens and dozens of VMs.
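Under the hood that RAID 10 setup amounts to something like the following mdadm invocation (OpenFiler drives it from its web GUI; the device names are examples):

```shell
#   mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]
#   mkfs.ext3 /dev/md0
```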
Very nice. You should be able to get very close to native performance using that. With jumbo frames, decent hardware, and some tuning you should be able to get about 60-80 MB/s read/write speeds over gigabit Ethernet for the host system. Of course the guest systems have limitations based on the VM technology.
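Enabling jumbo frames is one line per interface, though every NIC and switch in the path has to support the bigger MTU (interface name and address are examples; needs root):

```shell
#   ip link set eth0 mtu 9000
#   ping -M do -s 8972 192.168.1.10   # 8972 + 28 bytes of IP/ICMP headers = 9000;
#                                     # -M do forbids fragmentation, so a reply proves the path
```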