With iSCSI you configure a block device to be exported over the network. A LUN is just the SCSI term for identifying a drive.
So what I did was set up a simple software RAID 5 array and divide it up using logical volume management (LVM). I'd create a logical volume to be used for a VM, then configure iSCSI Enterprise Target to export that logical volume over the network as a drive.
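A rough sketch of that setup (the volume group name, LV name, IQN, and sizes here are just made-up examples, and the ietd.conf syntax is from iSCSI Enterprise Target):

```shell
# Carve a logical volume for one VM out of the RAID-5-backed volume group
lvcreate -L 20G -n vm1-disk vg_raid5

# Export it as a LUN via iSCSI Enterprise Target
cat >> /etc/ietd.conf <<'EOF'
Target iqn.2009-01.com.example:storage.vm1
    Lun 0 Path=/dev/vg_raid5/vm1-disk,Type=blockio
EOF

/etc/init.d/iscsi-target restart
```

Each VM gets its own LV and its own target, which keeps things simple to grow or snapshot per-VM with the usual LVM tools.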
With Xen I'd then use the Linux kernel's iSCSI initiator support on my desktop to log in to the target, and use each imported disk as a raw device for a guest VM.
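On the desktop side that's a couple of open-iscsi commands (IP, IQN, and the resulting device name are examples; the actual /dev/sdX you get depends on your system):

```shell
# Discover targets exported by the storage box
iscsiadm -m discovery -t sendtargets -p 192.168.10.5

# Log in; the kernel then presents the LUN as a local /dev/sdX device
iscsiadm -m node -T iqn.2009-01.com.example:storage.vm1 \
    -p 192.168.10.5 --login

# Hand that raw device to a Xen guest in its config file, e.g.:
#   disk = [ 'phy:/dev/sdb,xvda,w' ]
```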
With KVM and the virt-manager stuff they have it set up so the VM can be configured to use iSCSI directly. I haven't tried it with KVM yet.
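As I understand it, that works through a libvirt iSCSI storage pool, something along these lines (pool name, host, and IQN are placeholders; I haven't run this myself):

```shell
# Define an iSCSI-backed storage pool that virt-manager can then
# assign volumes from when creating guests
cat > iscsi-pool.xml <<'EOF'
<pool type='iscsi'>
  <name>vm-storage</name>
  <source>
    <host name='192.168.10.5'/>
    <device path='iqn.2009-01.com.example:storage.vm1'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF

virsh pool-define iscsi-pool.xml
virsh pool-start vm-storage
```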
This sort of thing is important if you're going to use VMs for business or whatever and want to take advantage of the live-migration features. One of the requirements is a common storage backend, so that the VM has consistent access to its storage after the move.
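Once both hosts see the same iSCSI storage, the migration itself is a one-liner (VM and host names are examples):

```shell
# KVM/libvirt: move a running guest to another host over SSH
virsh migrate --live vm1 qemu+ssh://host2/system

# Xen equivalent:
#   xm migrate --live vm1 host2
```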
Then, for reliability, you'd take advantage of other features like Ethernet bonding, Linux multipath, and maybe DRBD or other storage-replication features, so you can replicate storage and build highly reliable storage networks. The details are beyond me, though; I've done research but no actual implementation of stuff like that.
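For the DRBD piece, the idea is to mirror the LV that backs an iSCSI export between two storage boxes. A hedged sketch of a resource definition (hostnames, IPs, and devices are examples; this is DRBD 8-style syntax and I haven't deployed it):

```shell
cat > /etc/drbd.d/vm1.res <<'EOF'
resource vm1 {
  protocol C;                          # synchronous replication

  on storage1 {
    device    /dev/drbd0;              # what you'd actually export
    disk      /dev/vg_raid5/vm1-disk;  # local backing LV
    address   192.168.20.1:7789;
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/vg_raid5/vm1-disk;
    address   192.168.20.2:7789;
    meta-disk internal;
  }
}
EOF
```

You'd then point the iSCSI target at /dev/drbd0 instead of the raw LV, so every write lands on both boxes.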
That's one of the kick-ass things about KVM: you can more easily take advantage of all the little features, drivers, and hardware support that have been developed for Linux server use in the enterprise.
Oh, and for these block-level protocols like iSCSI, Red Hat's clustering stuff, or Fibre Channel... the security for these things sucks huge donkey balls. Their 'security' features are more about avoiding accidents than stopping attackers. So for security purposes you'd generally want to use a private network just for the storage, which is good for performance anyway. Of course for more casual uses like home usage it's not that important.
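To show what I mean about "avoiding accidents", this is about all the access control IET gives you: a CHAP username and password sitting in cleartext in the config (credentials here are obviously placeholders). It keeps an honest initiator from attaching to the wrong LUN, but it's no substitute for an isolated storage network:

```shell
cat >> /etc/ietd.conf <<'EOF'
Target iqn.2009-01.com.example:storage.vm1
    IncomingUser vmhost1 secretpass123
    Lun 0 Path=/dev/vg_raid5/vm1-disk,Type=blockio
EOF
```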