Linux Device Mapper Adding An "Emulated Block Size" Target


  • Linux Device Mapper Adding An "Emulated Block Size" Target

    Phoronix: Linux Device Mapper Adding An "Emulated Block Size" Target

    A new target for Linux's Device Mapper is EBS, the Emulated Block Size...

    http://www.phoronix.com/scan.php?pag...-DM-EBS-Target

  • #2
So can someone explain to me what this is for exactly? I'm trying to understand how running 512-byte block sizes on 4K sectors is actually beneficial? Noob question I'm sure, but I figure if Red Hat added it, there is some business need/want I'm unaware of.

    • #3
      Originally posted by Darksurf View Post
So can someone explain to me what this is for exactly? I'm trying to understand how running 512-byte block sizes on 4K sectors is actually beneficial? Noob question I'm sure, but I figure if Red Hat added it, there is some business need/want I'm unaware of.
It's beneficial in the sense that it makes such software compatible and able to work at all, ...

      • #4
        Originally posted by Darksurf View Post
So can someone explain to me what this is for exactly? I'm trying to understand how running 512-byte block sizes on 4K sectors is actually beneficial? Noob question I'm sure, but I figure if Red Hat added it, there is some business need/want I'm unaware of.
        Originally posted by phoronix View Post
        for dealing with software that isn't optimized for 4K sectors
        But I imagine Red Hat is doing it because Stratis will need something like this to deal with pools when a drive is upgraded.

        I say that because that's an issue that ZFS has and I'm wondering if ZFS will be able to leverage this too.

        • #5
So basically this is for backward compatibility with older drives in a pool? It's not that the OS/filesystem/software would need it as long as all the drives were native 4K drives?

          • #6
            Originally posted by Darksurf View Post
So basically this is for backward compatibility with older drives in a pool? It's not that the OS/filesystem/software would need it as long as all the drives were native 4K drives?
That's my guess, since:

            Code:
            +config DM_EBS
            +   tristate "Emulated block size target (EXPERIMENTAL)"
            +   depends on BLK_DEV_DM
            +   select DM_BUFIO
            +   help
            +     dm-ebs emulates smaller logical block size on backing devices
            +     with larger ones (e.g. 512 byte sectors on 4K native disks).
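For reference, the dm-ebs table line (per the device-mapper documentation) takes the backing device, an offset, the emulated block size, and optionally the underlying block size, all counted in 512-byte sectors. A minimal sketch, assuming a hypothetical 4K-native disk at /dev/sdb (device name and the `ebs512` mapping name are made up for illustration):

```shell
# Build a dm-ebs table line: emulate 512-byte logical blocks (1 sector)
# on top of a device with 4K physical blocks (8 sectors).
build_ebs_table() {
  local sectors=$1 dev=$2
  # <start> <length> ebs <dev path> <offset> <emulated sectors> <underlying sectors>
  echo "0 $sectors ebs $dev 0 1 8"
}

# Actual setup would be run as root against a real device, e.g.:
#   build_ebs_table "$(blockdev --getsz /dev/sdb)" /dev/sdb | dmsetup create ebs512
build_ebs_table 7814037168 /dev/sdb
# prints: 0 7814037168 ebs /dev/sdb 0 1 8
```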

            • #7
              It seems to me that the effort to do this would be better spent fixing whatever software is requiring 512 byte blocks.

              If that can't be done, then I guess it must be something proprietary and using direct IO. Some old version of Oracle maybe.

              • #8
                Originally posted by skeevy420 View Post
                That's my guess since
That would be a really bad design. You can easily read 8 sectors at once from a 512 B-sector device; this target is for old software which requires 512 B sectors (since you can't just read 1/8th of a sector from a 4 KB-sector drive).
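To illustrate the asymmetry: eight 512 B sectors can always be fetched from a 512 B device as one larger read, but a 512 B access on a 4K-native device lands somewhere inside a physical sector, which is what the emulation layer has to resolve. A toy sketch of that address mapping (plain shell arithmetic, not the actual dm-ebs code):

```shell
# Map a 512-byte logical sector number onto a 4K-physical-sector device:
# find the containing 4K sector and the byte offset within it. A 512 B
# write then implies read-modify-write of that whole 4K sector.
map_lba() {
  local lba=$1                       # 512-byte logical sector number
  local phys=$(( lba / 8 ))          # containing 4K physical sector
  local off=$(( (lba % 8) * 512 ))   # byte offset inside that sector
  echo "$phys $off"
}

map_lba 19   # -> physical sector 2, byte offset 1536
```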

                • #9
                  Originally posted by Darksurf View Post
So can someone explain to me what this is for exactly? I'm trying to understand how running 512-byte block sizes on 4K sectors is actually beneficial? Noob question I'm sure, but I figure if Red Hat added it, there is some business need/want I'm unaware of.
                  I'm guessing (guessing) that it has to do with some specific HP mainframes.

Red Hat is a funny company. They made a demo for us to showcase their "Satellite" technology. I'll spare some details to avoid collateral damage. Anyway, this wasn't some small demo; there were about 70 people present. They were telling us how we could manage updates to server farms. They first sold us on how simple and robust their solution was. Then came the demo. Nothing worked. They had created a virtual cluster of five servers and told us they were going to send an update. Nothing worked. One failure after another. Catastrophic failures. Then they assured us that in the real world we would never encounter these problems. We had plenty of free beers and chicken wings, and I even got a free T-shirt. But I left with TMI. Nothing good has ever, nor will ever, come out of Red Hat.

Look at all the problems the Linux ecosystem has, everywhere it's broken beyond recognition, and you'll find Red Hat is behind it.

                  • #10
                    Originally posted by Zan Lynx View Post
                    It seems to me that the effort to do this would be better spent fixing whatever software is requiring 512 byte blocks.

                    If that can't be done, then I guess it must be something proprietary and using direct IO. Some old version of Oracle maybe.

I can see uses for it. Say you have imaged a disc whose file system was created expecting a 512-byte drive, and the drive you are now using is 4K. That would be a data recovery/transfer use case. Of course, I would like to see this for SMR drives too, though that would need a hell of a lot larger block size.

Virtual machine images are another thing likely to have been created with 512-byte sectors and to now sit on a 4 KB drive, asking to do things that don't exactly fit any more.

It's not always as simple as updating the software; it's a lot trickier when you have to reformat a complete disc image in a hurry just to be drive compatible.
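A quick way to see whether an old 512-byte-sector image even lines up with 4K blocks is to check its length, a necessary (not sufficient) condition, since internal on-disk structures must be aligned too. A small sketch with assumed example sizes:

```shell
# Report whether an image's byte length is a whole number of 4K blocks.
# If not, the tail can't be expressed on a 4K-native device without
# some form of sub-block emulation.
is_4k_aligned() {
  local bytes=$1
  if [ $(( bytes % 4096 )) -eq 0 ]; then
    echo aligned
  else
    echo unaligned
  fi
}

is_4k_aligned 1048576    # 1 MiB image -> aligned
is_4k_aligned 1048064    # 1 MiB minus one 512 B sector -> unaligned
```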
