Linux Sound Subsystem Begins Cleaning Up Its Terminology To Meet Inclusive Guidelines


  • Originally posted by Djhg2000 View Post
    (As an aside I don't think master/slave should be capitalized, they're regular words just like any other.)
    To be correct, when writing against a standard it can be a mandatory directive; historic documents like the X11 standard mandated capitalisation of the first letter. Yes, it's a convention we forget today, but that's why, when I was talking about X11, I typed it that way.

    Originally posted by Djhg2000 View Post
    If it referred to hats then it clearly didn't refer to skin color and as such is categorically not racist, right?
    Exactly. The computing origin of blacklist/whitelist/greylist, which is provable from code comments, has no racist link at all.

    Originally posted by Djhg2000 View Post
    At some point you just can't be bothered to click "next page", even though you should in order to find the neutral results with low click scoring.
    Yes, true, and this leads to debating this stuff without the base knowledge.

    Originally posted by Djhg2000 View Post
    That's actually some neat trivia, I appreciate your dedication to doing things right as a prop person. So the idea was that the ink would flow around the protruding text? I must admit I'm more interested in the actual physical blacklist now than the blacklist/whitelist debate we were having.
    Oil- and water-based inks will not go through wax, so it's not exactly flowing around the wax; it's that the wax has water/oil-proofed the paper at that point. The excess wax will come off the document once you handle it, though with hot wax straight from a candle a percentage of it soaks into the paper. So wax soaks into paper like ink; it's effectively a whitish ink that, before the excess comes off, is normally a slightly different colour to the paper, so it is in fact visible.

    Originally posted by Djhg2000 View Post
    Technically correct but no accountant in their right mind would use wax and lots of ink.
    The inverted font made for the Gutenberg press disputes that point, as it was first made for mass printing the accounting documents of the time. There are a few different ways the old documents say to do it.

    The blotter is the expensive version. The cheaper version is to write in wax and then basically scribble over it with your normal quill. This also leads to numbers written with strikethrough being either negative or incorrect, which is why you need to initial anything you cross out as wrong in accountancy. Early computer accountancy programs also used either an inverted font or strikethrough for negatives; that is just a continuation from the 1400s to the fairly modern day (yes, within 30 years of the current day).

    Originally posted by Djhg2000 View Post
    As luck would have it, I came across an example of when this change gets destructive just a few hours ago. Here is the diff for the latest libvirt-daemon-system config update in Debian:
    Code:
    --- /etc/libvirt/qemu.conf 2020-07-13 19:38:22.981996824 +0200
    +++ /etc/libvirt/qemu.conf.dpkg-new 2020-07-27 22:50:08.000000000 +0200
    @@ -464,7 +464,7 @@
    # What cgroup controllers to make use of with QEMU guests
    #
    # - 'cpu' - use for scheduler tunables
    -# - 'devices' - use for device whitelisting
    +# - 'devices' - use for device access control
    # - 'memory' - use for memory tunables
    # - 'blkio' - use for block devices I/O tunables
    # - 'cpuset' - use for CPUs and memory nodes
    Note how it's no longer clear if the devices here are allowed or denied access. It's barely even clear whether these devices control the access or whether access to the devices is controlled. All of which was summarized with a single word before this change.
    This is one where you should have done more homework, because it's a great example of personal bias when you have not done your homework. This is a perfect example of whitelist being in fact wrong now, though it was kind of right a long time back. The correction there is in fact right, per the documentation in the Linux kernel, if you know what to read.


    Let's start with when it was kind of right to use "use for device whitelisting".
    https://www.kernel.org/doc/html/late...1/devices.html

    Code:
    [B]Device Whitelist Controller[/B]
    
    
    [B]1. Description[/B]
    
    Implement a cgroup to track and enforce open and mknod restrictions on device files. A device cgroup associates a device access whitelist with each cgroup. A whitelist entry has 4 fields. ‘type’ is a (all), c (char), or b (block). ‘all’ means it applies to all types and all major and minor numbers. Major and minor are either an integer or * for all. Access is a composition of r (read), w (write), and m (mknod).
    The root device cgroup starts with rwm to ‘all’. A child device cgroup gets a copy of the parent. Administrators can then remove devices from the whitelist or add new entries. A child cgroup can never receive a device access which is denied by its parent.
    
    
    [B]2. User Interface[/B]
    
    An entry is added using devices.allow, and removed using devices.deny. For instance:
    
    echo 'c 1:3 mr' > /sys/fs/cgroup/1/devices.allow
    
    allows cgroup 1 to read and mknod the device usually known as /dev/null. Doing:
    
    echo a > /sys/fs/cgroup/1/devices.deny
    
    will remove the default ‘a *:* rwm’ entry. Doing:
    
    echo a > /sys/fs/cgroup/1/devices.allow
    
    will add the ‘a *:* rwm’ entry to the whitelist.
    The basic description at the start looks fine; do note this is cgroup v1. Notice that you have allow and deny lists inside this, so there are already signs that this may not stay a pure whitelist implementation. There is one particular problem sentence that is going to trigger the change.
    Code:
    A child cgroup can never receive a device access which is denied by its parent.
    Do you really want to have to list every single device in the system just to prevent a container from accessing only one particular device? This is going to cause a redesign at some point.
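    As a sketch of what that whitelist-only model forces on you (hypothetical cgroup name, needs root, and assumes the v1 devices controller mounted at /sys/fs/cgroup/devices):

```shell
# cgroup v1: to end up with "everything except one device" you drop the
# default allow-all entry and then re-allow devices one whitelist entry
# at a time (semantics as described in the kernel docs quoted above).
mkdir -p /sys/fs/cgroup/devices/guest
echo 'a' > /sys/fs/cgroup/devices/guest/devices.deny           # remove 'a *:* rwm'
echo 'c 1:3 rwm' > /sys/fs/cgroup/devices/guest/devices.allow  # /dev/null
echo 'c 1:5 rwm' > /sys/fs/cgroup/devices/guest/devices.allow  # /dev/zero
echo 'c 1:8 rwm' > /sys/fs/cgroup/devices/guest/devices.allow  # /dev/random
# ...one entry per device you still want, for every device on the system.
```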

    Now let's look at cgroup v2, where we find something different that means "use for device access control" is correct and "use for device whitelisting" is now wrong, and the weakness has been corrected.
    https://www.kernel.org/doc/html/late...cgroup-v2.html
    Code:
    [B]Device controller[/B]
    
    Device controller manages access to device files. It includes both creation of new device files (using mknod), and access to the existing device files.
    Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF. To control access to device files, a user may create bpf programs of the BPF_CGROUP_DEVICE type and attach them to cgroups. On an attempt to access a device file, corresponding BPF programs will be executed, and depending on the return value the attempt will succeed or fail with -EPERM.
    A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx structure, which describes the device access attempt: access type (mknod/read/write) and device (type, major and minor numbers). If the program returns 0, the attempt fails with -EPERM, otherwise it succeeds.
    An example of BPF_CGROUP_DEVICE program may be found in the kernel source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
    Cgroup v2 in fact directly allows a functional deny list (blacklist) instead of an allow list (whitelist), and even allows a filter list (greylist): the BPF program can interface with a userspace program to ask the user whether access to a given device is allowed.
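    To make the deny-list point concrete, here is the verdict convention a BPF_CGROUP_DEVICE program follows, modelled as a plain shell function rather than a real BPF object (the device numbers are illustrative; printing 0 stands for the -EPERM verdict, 1 for allowed):

```shell
# Models the return-value convention of a BPF_CGROUP_DEVICE program:
# 0 => access fails with -EPERM, 1 => access allowed.
# A deny-list policy is just "block the listed device, allow the rest".
device_verdict() {  # args: major minor
    if [ "$1" -eq 10 ] && [ "$2" -eq 232 ]; then
        echo 0      # deny char device 10:232 (/dev/kvm on typical systems)
    else
        echo 1      # every other device is allowed
    fi
}
device_verdict 10 232   # prints 0 (denied)
device_verdict 1 3      # prints 1 (/dev/null allowed)
```

    Under the v1 whitelist model this shape of policy (allow everything but one device) needs an entry per device; here it is a two-line check.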

    So yes, if you only support cgroup v1, "'devices' - use for device whitelisting" is kind of right. But once you start supporting cgroup v2 it is "device access control", since now you can have fully functional allow/deny/filter lists that you create by converting whatever information is there into a BPF program.

    So you have just found an example of where whitelist as a term was right in the past and, due to Linux kernel changes expanding features, is now completely wrong. Device filtering in cgroup v2 is no longer limited to whitelist methods. The BPF method also means something can change from allowed to denied to filtered on the fly as well.

    So not all cases of removing blacklist or whitelist from the Linux kernel and supporting programs have anything to do with political correctness; sometimes it's simply that the feature has expanded so that those terms don't work any more, and that is exactly what you just pointed to. For libvirt with QEMU, talking about the cgroup controller around QEMU, the term whitelist does not fit any more because they are upgrading to support cgroup v2, where things are different.



    • Originally posted by oiaohm View Post
      To be correct, when writing against a standard it can be a mandatory directive; historic documents like the X11 standard mandated capitalisation of the first letter. Yes, it's a convention we forget today, but that's why, when I was talking about X11, I typed it that way.
      This only applies to the specific instances though. I'm talking about master/slave as generic terms, not in reference to any specific subset of software.

      Originally posted by oiaohm View Post
      Exactly. The computing origin of blacklist/whitelist/greylist, which is provable from code comments, has no racist link at all.
      So your whole objection on the matter of using blacklist/whitelist is based on the conflict of conventions with accounting, correct?

      Originally posted by oiaohm View Post
      Yes, true, and this leads to debating this stuff without the base knowledge.
      I'd argue this is a much bigger issue than the terminology nitpicking we're dealing with in this thread, especially as it affects everything we use search engines for. It's mind boggling how much control we've surrendered to algorithms, and the inherent monoculture which all click scoring systems inevitably result in. But I'll leave it at that and get back to the topic at hand before I go on yet another rant about this.

      Originally posted by oiaohm View Post
      Oil- and water-based inks will not go through wax, so it's not exactly flowing around the wax; it's that the wax has water/oil-proofed the paper at that point. The excess wax will come off the document once you handle it, though with hot wax straight from a candle a percentage of it soaks into the paper. So wax soaks into paper like ink; it's effectively a whitish ink that, before the excess comes off, is normally a slightly different colour to the paper, so it is in fact visible.
      That's a pretty neat way of inverting handwriting on a paper.

      Originally posted by oiaohm View Post
      The inverted font made for the Gutenberg press disputes that point, as it was first made for mass printing the accounting documents of the time. There are a few different ways the old documents say to do it.

      The blotter is the expensive version. The cheaper version is to write in wax and then basically scribble over it with your normal quill. This also leads to numbers written with strikethrough being either negative or incorrect, which is why you need to initial anything you cross out as wrong in accountancy. Early computer accountancy programs also used either an inverted font or strikethrough for negatives; that is just a continuation from the 1400s to the fairly modern day (yes, within 30 years of the current day).
      To me that makes little sense. Why would accountants use inverted fonts aside from exceptions when printing documents? The extra ink was, and still is, an extra expense no matter how you look at it.

      Using strikethrough is something I've heard of before but as I recall it's because even errors need to be present for the purpose of performing audits. That said I've never done any serious accounting nor delved too deeply into what the formal conventions are.

      Originally posted by oiaohm View Post
      This is one where you should have done more homework, because it's a great example of personal bias when you have not done your homework. This is a perfect example of whitelist being in fact wrong now, though it was kind of right a long time back. The correction there is in fact right, per the documentation in the Linux kernel, if you know what to read.


      Let's start with when it was kind of right to use "use for device whitelisting".
      https://www.kernel.org/doc/html/late...1/devices.html

      Code:
      [B]Device Whitelist Controller[/B]
      
      
      [B]1. Description[/B]
      
      Implement a cgroup to track and enforce open and mknod restrictions on device files. A device cgroup associates a device access whitelist with each cgroup. A whitelist entry has 4 fields. ‘type’ is a (all), c (char), or b (block). ‘all’ means it applies to all types and all major and minor numbers. Major and minor are either an integer or * for all. Access is a composition of r (read), w (write), and m (mknod).
      The root device cgroup starts with rwm to ‘all’. A child device cgroup gets a copy of the parent. Administrators can then remove devices from the whitelist or add new entries. A child cgroup can never receive a device access which is denied by its parent.
      
      
      [B]2. User Interface[/B]
      
      An entry is added using devices.allow, and removed using devices.deny. For instance:
      
      echo 'c 1:3 mr' > /sys/fs/cgroup/1/devices.allow
      
      allows cgroup 1 to read and mknod the device usually known as /dev/null. Doing:
      
      echo a > /sys/fs/cgroup/1/devices.deny
      
      will remove the default ‘a *:* rwm’ entry. Doing:
      
      echo a > /sys/fs/cgroup/1/devices.allow
      
      will add the ‘a *:* rwm’ entry to the whitelist.
      The basic description at the start looks fine; do note this is cgroup v1. Notice that you have allow and deny lists inside this, so there are already signs that this may not stay a pure whitelist implementation. There is one particular problem sentence that is going to trigger the change.
      Code:
      A child cgroup can never receive a device access which is denied by its parent.
      Do you really want to have to list every single device in the system just to prevent a container from accessing only one particular device? This is going to cause a redesign at some point.

      Now let's look at cgroup v2, where we find something different that means "use for device access control" is correct and "use for device whitelisting" is now wrong, and the weakness has been corrected.
      https://www.kernel.org/doc/html/late...cgroup-v2.html
      Code:
      [B]Device controller[/B]
      
      Device controller manages access to device files. It includes both creation of new device files (using mknod), and access to the existing device files.
      Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF. To control access to device files, a user may create bpf programs of the BPF_CGROUP_DEVICE type and attach them to cgroups. On an attempt to access a device file, corresponding BPF programs will be executed, and depending on the return value the attempt will succeed or fail with -EPERM.
      A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx structure, which describes the device access attempt: access type (mknod/read/write) and device (type, major and minor numbers). If the program returns 0, the attempt fails with -EPERM, otherwise it succeeds.
      An example of BPF_CGROUP_DEVICE program may be found in the kernel source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
      Cgroup v2 in fact directly allows a functional deny list (blacklist) instead of an allow list (whitelist), and even allows a filter list (greylist): the BPF program can interface with a userspace program to ask the user whether access to a given device is allowed.

      So yes, if you only support cgroup v1, "'devices' - use for device whitelisting" is kind of right. But once you start supporting cgroup v2 it is "device access control", since now you can have fully functional allow/deny/filter lists that you create by converting whatever information is there into a BPF program.

      So you have just found an example of where whitelist as a term was right in the past and, due to Linux kernel changes expanding features, is now completely wrong. Device filtering in cgroup v2 is no longer limited to whitelist methods. The BPF method also means something can change from allowed to denied to filtered on the fly as well.

      So not all cases of removing blacklist or whitelist from the Linux kernel and supporting programs have anything to do with political correctness; sometimes it's simply that the feature has expanded so that those terms don't work any more, and that is exactly what you just pointed to. For libvirt with QEMU, talking about the cgroup controller around QEMU, the term whitelist does not fit any more because they are upgrading to support cgroup v2, where things are different.
      But it seems you didn't do your homework here. The syntax for the cgroup ACL the comment refers to is still in the form of a whitelist:
      Code:
      # What cgroup controllers to make use of with QEMU guests
      #
      # - 'cpu' - use for scheduler tunables
      # - 'devices' - use for device whitelisting
      # - 'memory' - use for memory tunables
      # - 'blkio' - use for block devices I/O tunables
      # - 'cpuset' - use for CPUs and memory nodes
      # - 'cpuacct' - use for CPUs statistics.
      #
      # NB, even if configured here, they won't be used unless
      # the administrator has mounted cgroups, e.g.:
      #
      # mkdir /dev/cgroup
      # mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
      #
      # They can be mounted anywhere, and different controllers
      # can be mounted in different locations. libvirt will detect
      # where they are located.
      #
      #cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]
      
      # This is the basic set of devices allowed / required by
      # all virtual machines.
      #
      # As well as this, any configured block backed disks,
      # all sound device, and all PTY devices are allowed.
      #
      # This will only need setting if newer QEMU suddenly
      # wants some device we don't already know about.
      #
      #cgroup_device_acl = [
      # "/dev/null", "/dev/full", "/dev/zero",
      # "/dev/random", "/dev/urandom",
      # "/dev/ptmx", "/dev/kvm"
      #]
      #
      # RDMA migration requires the following extra files to be added to the list:
      # "/dev/infiniband/rdma_cm",
      # "/dev/infiniband/issm0",
      # "/dev/infiniband/issm1",
      # "/dev/infiniband/umad0",
      # "/dev/infiniband/umad1",
      # "/dev/infiniband/uverbs0"
      cgroup_device_acl = [
          "/dev/null", "/dev/full", "/dev/zero",
          "/dev/random", "/dev/urandom",
          "/dev/ptmx", "/dev/kvm",
          "/dev/rtc","/dev/hpet",
          "/dev/input/by-id/usb-Corsair_Corsair_K70R_Gaming_Keyboard-if02-event-kbd",
          "/dev/input/by-id/usb-Logitech_Gaming_Mouse_G600_3AB43C8E80600017-event-mouse",
          "/dev/input/by-id/usb-Logitech_Gaming_Mouse_G600_3AB43C8E80600017-if01-event-kbd"
      ]
      As you see here, the devices specified in the cgroup_device_acl structure are devices which a VM is allowed to access. You may also notice this is a feature which I'm actively using as well. In this particular instance it allows udev to gracefully hand off my mouse and keyboard to a libvirtd-owned instance of QEmu, toggled by a hotkey combination instead of me having to forward the raw USB devices with a spare keyboard/mouse. Very useful for VMs where I use programs requiring low latency raw inputs (SPICE is neat and all but too buggy and jittery for these applications).

      Its use as a whitelist is clarified by another comment right before the structure as well, but the devices controller in the cgroup_controllers structure has no other purpose but to provide the functionality required by the aforementioned whitelist. There is no blacklist because that's the default action for devices not specified in the whitelist.

      The fact that the underlying cgroup implementation changed over the years is, while true, irrelevant to the libvirtd config file syntax.



      • Originally posted by Djhg2000 View Post
        So your whole objection on the matter of using blacklist/whitelist is based on the conflict of conventions with accounting, correct?
        The oldest historically documented example is the accounting one. Black and white are used in accountancy in very mixed ways. In different electrical standards black and white can each be good or bad. In some religions black is good. With allow and deny there are no conflicting secondary interpretations.

        Originally posted by Djhg2000 View Post
        To me that makes little sense. Why would accountants use inverted fonts aside from exceptions when printing documents? The extra ink was, and still is, an extra expense no matter how you look at it.
        It's more a question of how to write negative values cost-effectively. Yes, it does depend on how you look at it. Putting in a - sign makes the number wider, whereas inverted and strikethrough take the same space as the positive version of the number. It becomes a question of the cost of paper vs the cost of ink.

        Think of the following, with bold as inverted/strikethrough:
        1000 1000
        -1000 1000
        See that using a - sign makes the column one character wider, so more paper. It could be a lot more paper: think of the big A1 and larger accountancy spreadsheets of old. Make the columns wider to allow for a - sign and that means quite a few fewer columns across the complete page, so you need more pages to record the same information.

        Yes, accountants do get penny-pinching down to an art form, and sometimes the answer is not quite what you expect. Remember, around 1400 paper was very expensive, to the point that covering a complete page in ink was cheaper than making the page wider to allow for - signs. So it makes historic sense. Strikethrough and inverted fonts in early computer accountancy programs made sense for the same kind of reason: in 40×25 text mode the - sign is a hell of a luxury in screen real estate when you are trying to display as much information as possible on screen. Basically the same problem as 1400s accountancy, so since the old method worked back then, let's just use it again.

        Yes, a blacklist in 1400s accountancy could be a page with everyone who owes you money, or it could equally have been called a whitelist by the convention back then, because the text is all white. "Being in the black" in accountancy, meaning being in the positive, is the convention based on text colour; it would be "being in the white" for positive under an accountancy standard that goes by paper colour. Accountancy from the 1400s right up to the current day has conflicting standards over whether you go by text colour or background colour when you word things.

        So be thankful the computer black/white/grey lists are based on the Hollywood trope; if they were based on the 1400s accountancy definitions of blacklist/whitelist/greylist it would truly be as clear as mud, with no right answer. The accountancy case with black/white/grey lists is why at times you don't want to go back to the very first usage of a term but instead want to follow the origin tree.

        Originally posted by Djhg2000 View Post
        But it seems you didn't do your homework here. The syntax for the cgroup ACL the comment refers to is still in the form of a whitelist:
        Code:
        # What cgroup controllers to make use of with QEMU guests
        # mkdir /dev/cgroup
        # mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
        Note the -t cgroup; if the code base had been updated to full cgroup v2 that would be -t cgroup2.
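        A quick way to sketch the difference (these assume a systemd-style layout; the mount lines are illustrative, not something to run on a live system):

```shell
# On a unified cgroup v2 hierarchy this prints cgroup2fs;
# on a v1 layout it reports tmpfs, with per-controller cgroup mounts below.
stat -fc %T /sys/fs/cgroup/

# Mounting by hand uses a different filesystem type per version:
mount -t cgroup -o devices,cpu none /mnt/cg1   # cgroup v1, per-controller
mount -t cgroup2 none /mnt/cg2                 # cgroup v2, unified hierarchy
```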

        The process of upgrading a code base from cgroup v1 to cgroup v2 involves a collection of different changes. Yes, some of the documentation updates at times happen before the core code.

        Really, you have not done your homework on what upstream cgroups is doing to understand how libvirt will be changing in future, and to see that a documentation change is just a precursor to the other changes that will come to the code base as well.



        • Originally posted by oiaohm View Post
          The oldest historically documented example is the accounting one. Black and white are used in accountancy in very mixed ways. In different electrical standards black and white can each be good or bad. In some religions black is good. With allow and deny there are no conflicting secondary interpretations.
          But flipping the tables I think it's reasonable to expect software developers to figure out the gist of what blacklist/whitelist means, just like it's reasonable to expect accountants to figure out what black and inverted/white/red ink means.

          Originally posted by oiaohm View Post
          It's more a question of how to write negative values cost-effectively. Yes, it does depend on how you look at it. Putting in a - sign makes the number wider, whereas inverted and strikethrough take the same space as the positive version of the number. It becomes a question of the cost of paper vs the cost of ink.

          Think of the following, with bold as inverted/strikethrough:
          1000 1000
          -1000 1000
          See that using a - sign makes the column one character wider, so more paper. It could be a lot more paper: think of the big A1 and larger accountancy spreadsheets of old. Make the columns wider to allow for a - sign and that means quite a few fewer columns across the complete page, so you need more pages to record the same information.

          Yes, accountants do get penny-pinching down to an art form, and sometimes the answer is not quite what you expect. Remember, around 1400 paper was very expensive, to the point that covering a complete page in ink was cheaper than making the page wider to allow for - signs. So it makes historic sense. Strikethrough and inverted fonts in early computer accountancy programs made sense for the same kind of reason: in 40×25 text mode the - sign is a hell of a luxury in screen real estate when you are trying to display as much information as possible on screen. Basically the same problem as 1400s accountancy, so since the old method worked back then, let's just use it again.

          Yes, a blacklist in 1400s accountancy could be a page with everyone who owes you money, or it could equally have been called a whitelist by the convention back then, because the text is all white. "Being in the black" in accountancy, meaning being in the positive, is the convention based on text colour; it would be "being in the white" for positive under an accountancy standard that goes by paper colour. Accountancy from the 1400s right up to the current day has conflicting standards over whether you go by text colour or background colour when you word things.

          So be thankful the computer black/white/grey lists are based on the Hollywood trope; if they were based on the 1400s accountancy definitions of blacklist/whitelist/greylist it would truly be as clear as mud, with no right answer. The accountancy case with black/white/grey lists is why at times you don't want to go back to the very first usage of a term but instead want to follow the origin tree.
          So basically accountancy has its own separate mess. Why should it have any bearing on for what blacklist and whitelist does then? From my perspective it seems more logical for accountants to figure out their conventions on their own and stay away from interfering with conventions from the rest of the world.

          Engineering is full of ambiguous conventions when you put them side by side, but they all at least somewhat make sense in their own context (except perhaps X11 server/client).

          Originally posted by oiaohm View Post
          Note the -t cgroup; if the code base had been updated to full cgroup v2 that would be -t cgroup2.

          The process of upgrading a code base from cgroup v1 to cgroup v2 involves a collection of different changes. Yes, some of the documentation updates at times happen before the core code.

          Really, you have not done your homework on what upstream cgroups is doing to understand how libvirt will be changing in future, and to see that a documentation change is just a precursor to the other changes that will come to the code base as well.
          That's not my full quote so I'd prefer if you use the "[...]" convention (or similar) when splicing parts together.

          Changing the documentation before the code is doing it the wrong way around. Unless you have the new code figured out there's no way to know you're not misleading the users.

          Besides, it wouldn't make sense to introduce blacklists in this context unless you want them to be global overrides, in which case the blacklist wouldn't be in the device controller itself (since its function is to be an intermediate between libvirt and cgroups), but rather an internal hierarchy in libvirt, and the comment would be even further misleading. It's being called a whitelist because it is a whitelist.



          • Originally posted by Djhg2000 View Post
            But flipping the tables I think it's reasonable to expect software developers to figure out the gist of what blacklist/whitelist means, just like it's reasonable to expect accountants to figure out what black and inverted/white/red ink means.
            It's not exactly where the problem ends. Accountancy is the oldest case but not the only one. Also, those doing accountancy are the ones processing lots of product return/refund and after-sales information, so it is at times important that software developers and accountancy have the same understanding.

            A driver adding particular hardware to a deny list can be something accountancy people have requested, because they know that X bit of hardware does not work with Y driver. Using blacklist/whitelist here could lead to a screw-up.

            Originally posted by Djhg2000 View Post
            So basically accountancy has its own separate mess. Why should it have any bearing on for what blacklist and whitelist does then? From my perspective it seems more logical for accountants to figure out their conventions on their own and stay away from interfering with conventions from the rest of the world.

            Engineering is full of ambiguous conventions when you put them side by side, but they all at least somewhat make sense in their own context (except perhaps X11 server/client).
            Electrical engineering, with USA AC wiring and other things, uses black and white in different areas with the inverted meaning from the computer world's black/white. So this leads to a software developer having to talk to the electrical engineer who made the hardware, who has a different idea of what black and white mean. Yes, this has led to some historic code goofs in drivers. Using allow and deny instead prevents these problems.

            I see blacklist and whitelist as a source of operational problems. The colour black is not always bad and white is not always good; that is what causes the communication problem. Different fields define them differently. Heck, you have USA people using the words positive and negative with AC instead of the correct terms active and neutral.

            Yes, engineering is full of ambiguous conventions, and over time they are commonly the causes of fatal disasters. If it is possible to get rid of an ambiguous convention by replacing it with a term of more exact meaning, the long-term result is a good outcome for lives.

            Originally posted by Djhg2000 View Post
            Changing the documentation before the code is doing it the wrong way around. Unless you have the new code figured out there's no way to know you're not misleading the users.
            In the case of cgroup v1 to cgroup v2, you can use terms that cover both cgroup v1 and cgroup v2. Yes, changing the documentation before changing the code is kind of the wrong way round, but this is a side effect of the lack of documentation writers: when a documentation writer is on hand, the documentation has to be made as future-proof as possible, because you might be waiting five years before another one touches it.

            In the case of cgroup v1 and cgroup v2, the upstream infrastructure in the Linux kernel is already set in stone, so you can make some very solid predictions even before the userspace code updates.


            Originally posted by Djhg2000 View Post
            Besides, it wouldn't make sense to introduce blacklists in this context unless you want them to be global overrides, in which case the blacklist wouldn't be in the device controller itself (since its function is to be an intermediary between libvirt and cgroups), but rather an internal hierarchy in libvirt, and the comment would be even more misleading. It's being called a whitelist because it is a whitelist.
            When the code base gets updated to cgroup v2 and no documentation writer is there to update the documentation, using the term whitelist is going to be a problem.

            For a virtual machine manager, a deny-list model can make sense instead of an allow-list model on a particular VM. Think of a hypervisor passing all unassigned devices through to a particular virtual machine: making up a list of known assigned devices to deny access to is simple, while making up an allow list covering every future device someone could plug in is another problem. There are some cgroup v2-using hypervisor designs that do in fact have both allow-list and deny-list modes.

            libvirt could implement a proper deny list in future, and cgroup v2 allows this.
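            The difference between the two modes described above can be sketched as follows. The class and field names are invented for illustration; this is not a real libvirt or cgroup API.

```python
# Sketch of the two device-ACL modes discussed above, as a hypervisor
# might model them. Names are illustrative, not any real API.
from dataclasses import dataclass, field

@dataclass
class DeviceACL:
    mode: str                          # "allow" or "deny"
    entries: set = field(default_factory=set)

    def permits(self, device: str) -> bool:
        if self.mode == "allow":
            # Allow-list: only explicitly listed devices pass through.
            return device in self.entries
        # Deny-list: everything passes except the listed devices,
        # so future hot-plugged devices are covered automatically.
        return device not in self.entries

# Allow-list VM: every permitted device must be enumerated up front.
vm_allow = DeviceACL(mode="allow", entries={"/dev/null", "/dev/kvm"})

# Deny-list VM: only the known assigned devices are withheld.
vm_deny = DeviceACL(mode="deny", entries={"/dev/sda"})
```

            The deny-list VM automatically permits a newly plugged device, while the allow-list VM rejects it until its entry is added, which is exactly the enumeration problem described above.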

            Comment


            • Originally posted by oiaohm View Post
              It's not exactly where the problem ends. Accountancy is the oldest case but not the only one. Also, those doing accountancy are the ones processing lots of product return/refund and after-sales information, so it is at times important that software developers and accountants share the same understanding.

              A driver adding particular hardware to a deny list can be something the accountancy people have requested, because they know that X bit of hardware does not work with Y driver. Using blacklist/whitelist here could lead to a screw-up.
              I'm pretty sure accountancy isn't the oldest use of black and white to signify different meanings. It's far more likely that we, as a species, connected the day and night cycle with different levels of risk. After all many of the natural predators which could challenge our survival are more active at night. Furthermore we have terrible eyesight in low light conditions compared to other animals so in a direct battle we wouldn't stand much of a chance. This is a more likely origin of the convention of using dark and bright for bad and good, evolving over time with our spoken languages to black and white for negative and positive. In Scandinavian languages you can still see the traces of this where dark and bright are more or less synonymous with bad/depressed and good/joyful in most contexts. AFAIK similar traits are also present in English.

              I don't know how much overlap we have between accountants and kernel developers, but I guess, albeit small, there's still a point there. I don't think that's enough of a justification to change, though.

              Originally posted by oiaohm View Post
              Electrical engineering, with USA AC wiring and other conventions, uses black and white in different areas with the inverted meaning from the computer world's black/white as well. So this leads to a software developer having to talk to the electrical engineer who made the hardware, who has a different idea of what black and white mean. Yes, this has led to some historic code goofs in drivers. Using Allow and Deny instead prevents these problems.

              I see blacklist and whitelist as a source of operational problems. The colour black is not always bad and white is not always good; this is what causes the communication problem. Different fields define them differently. Heck, you have USA people using the words positive and negative with AC instead of the correct terms active and neutral.

              Yes, engineering is full of ambiguous conventions, and over time they are often the causes of fatal disasters. If it is possible to get rid of an ambiguous convention by replacing it with a more exact term, the long-term result is a good outcome for lives.
              But how far should we take this? Even in electrical engineering there are still ambiguous or flat-out counterintuitive conventions.

              Plenty of screw-ups have happened because we use negative to describe the source of electrons and positive as the sink for electrons. Should we have another go at hard flipping those around to be intuitive tomorrow? Last time was a disaster for those who tried, and we'd have a very clear generational divide where the two generations simply cannot work together, with the experience of the older generation lost. I don't see conventional flow going anywhere soon, but at least we've started laying the groundwork for a slow change over future generations by insisting we call it conventional flow.

              Or should we perhaps come up with a new convention for representing torque in mechanical engineering (double-tip arrows) that doesn't require you to know the right-hand rule?

              We have to accept ambiguity. There's simply no way around it without resorting to dictator-like methods where one field sets the standard for everyone and it's up to the hands-on engineers to deal with the resulting paradoxes.

              Originally posted by oiaohm View Post
              In the case of cgroup v1 to cgroup v2, you can use terms that cover both cgroup v1 and cgroup v2. Yes, changing the documentation before changing the code is kind of the wrong way round, but this is a side effect of the lack of documentation writers: when a documentation writer is on hand, the documentation has to be made as future-proof as possible, because you might be waiting five years before another one touches it.

              In the case of cgroup v1 and cgroup v2, the upstream infrastructure in the Linux kernel is already set in stone, so you can make some very solid predictions even before the userspace code updates.
              If the documentation is ambiguous now, what tells you it's going to be immediately clear one rewrite down the line?

              Originally posted by oiaohm View Post
              When the code base gets updated to cgroup v2 and no documentation writer is there to update the documentation, using the term whitelist is going to be a problem.

              For a virtual machine manager, a deny-list model can make sense instead of an allow-list model on a particular VM. Think of a hypervisor passing all unassigned devices through to a particular virtual machine: making up a list of known assigned devices to deny access to is simple, while making up an allow list covering every future device someone could plug in is another problem. There are some cgroup v2-using hypervisor designs that do in fact have both allow-list and deny-list modes.

              libvirt could implement a proper deny list in future, and cgroup v2 allows this.
              This is the global libvirt config file. Anything included on this whitelist can be made available to QEmu instances spawned from libvirtd, but libvirtd still controls what the VM can access. There's no difference as far as the VM is concerned; the whitelist is entirely internal to the libvirt infrastructure. It's a binary decision: either it's on the whitelist or it's on the implicit blacklist. There's no middle ground here on the global level.

              Now that I think about it, each libvirt VM configuration already has its own whitelist and implicit blacklist as well. You can't just enable it in the global config and expect it to work; the same entries still need to be added to the QEmu command line in the VM configuration.
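              For concreteness, the global whitelist in question is the `cgroup_device_acl` list in libvirt's `/etc/libvirt/qemu.conf`. The entries below are typical defaults, abridged; check your own qemu.conf for the authoritative list.

```
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm",
]
```

              Any device node not in this list lands on the implicit deny side, which is the binary decision being described.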

              Comment


              • Originally posted by Djhg2000 View Post
                I'm pretty sure accountancy isn't the oldest use of black and white to signify different meanings. It's far more likely that we, as a species, connected the day and night cycle with different levels of risk. After all many of the natural predators which could challenge our survival are more active at night. Furthermore we have terrible eyesight in low light conditions compared to other animals so in a direct battle we wouldn't stand much of a chance.
                This is a mistake and underestimates humans. Once you understand the mistake, the features of vampires and the link between vampires and bats make sense.
                https://en.wikipedia.org/wiki/Human_echolocation

                It turns out that, just like bats, we humans can do echolocation. The fact that we live in the light a lot these days and don't train in echolocation means the skill is not as common any more. The click-based languages in fact go back to echolocation. So humans skilled in echolocation are highly effective in the dark. For those human groups who were skilled in echolocation, black is good and white is bad, as they had the advantage at night over the normal prey targets of humans. Yes, you are right that English has a bias towards light being good and dark being bad. The problem here is that this is not true for every language; there are languages with the bias the other way as well. As we get more diversity in programmers from different language backgrounds, the issues from using black and white get worse due to these native-language differences as well.

                This is the problem: you have presumed there is a hard constant rule that white is good and black is bad. For humans it is not a hard constant rule. Accountancy in English documents is the first place where we see the inversion of black and white, but the inverted usage of black and white goes back into other languages, because it makes sense once you are aware that human hunting methods are a lot closer to bats', where we can use both vision and echolocation. Remember, lots of bats use vision over echolocation when they have light as well. Yes, echolocation could explain how particular tombs in Egypt were in fact carved, since humans skilled in echolocation don't need to use their eyes to work out where they are in a space.

                Originally posted by Djhg2000 View Post
                I don't know how much overlap we have between accountants and kernel developers, but I guess albeit small there's still a point there. I don't think that's enough of a justification to change though.
                There is more overlap than you would think. Most kernel developers are full-time paid staff, so they have to explain what they have been doing to justify their wages to human resources, and this finally goes up to accountancy. So anything that causes confusion between the accountancy side and kernel developers is not exactly good, as it can result in hours of developer time being wasted sorting out the miscommunication just to keep getting paid.


                Originally posted by Djhg2000 View Post
                We have to accept ambiguity. There's simply no way around it without resorting to dictator-like methods where one field sets the standard for everyone and it's up to the hands-on engineers to deal with the resulting paradoxes.
                Yes, we have to accept there is going to be ambiguity in terms, but when we have a chance of getting rid of some of it, we should take it.

                Originally posted by Djhg2000 View Post
                If the documentation is ambiguous now, what tells you it's going to be immediately clear one rewrite down the line?
                The horrible reality is that most open source documentation is constantly in some state of ambiguity, because there are really not enough documentation writers to rewrite the documentation as often as it needs to remain perfectly clear. If you look across the libvirt documentation, there are other areas that are vaguely written that have nothing to do with the blacklist/whitelist stuff. So the style the libvirt documentation writers use employs ambiguity to leave space for future features. This unfortunately is a fairly common open source documentation style. Yes, I agree with hating this style. But thinking that the change was some political-correctness thing is reading more into it than there really was. The ambiguous writing style that leaves space for future features is a totally different problem from the political-correctness thing, and it makes the learning curve of many bits of open source and closed source software harder.

                It's really simple to incorrectly conflate the ambiguous-writing-style problem with political-correctness changes without doing proper research into why the change happened and what writing style is in use. The libvirt change you pulled out should not be linked to political correctness, as that is not the cause; the ambiguous writing style used to leave space for future features is the cause there. If you want to hate that, I back you. To reduce that style being used, we need more documentation writers so documentation can always be kept current, removing the temptation to use it.

                Comment


                • But what about the auxiliary temperature sensor?

                  Comment


                  • Originally posted by computerquip View Post

                    Nearly 150K dead because a group thought masks took their freedoms away, they believed that COVID-19 was a hoax, and they opened our country up without really caring about what our health experts were saying.
                    I wanna just say first, I'm not your enemy and I don't hold any animosity towards you.
                    Most of those deaths are from New York, New Jersey, and Michigan. All 3 of these states implemented the most extreme lockdowns and mandates regarding the illness.
                    In Michigan and New York in particular, huge chunks of the total death counts occurred in nursing homes. Nursing homes that were sent Covid-positive patients, many of whom weren't even elderly. There are many criminal negligence lawsuits being filed around these deaths.
                    The other important thing to realize is that death certificates were marked with Covid-19 as the cause of death, despite people dying of heart-attacks, cancer, and other serious illnesses that they had been fighting a losing battle against for years. A doctor even published orders from the State Govt to do this, and was viciously attacked over it.
                    Many family members of deceased people have come forward about the cause of death being marked Covid despite clearly dying from another cause.
                    Lastly, the total number of dead is actually quite normal and consistent with a serious Influenza season, and this is especially pertinent when considering the massive drops in other cause of death designations while Covid has been happening.
                    Last month the CDC published a report that stated that only around 15 thousand people died solely of Covid with no other major co-morbidities
                    I know my position probably seems insane to you, but I urge you to look into the things I mentioned.
                    Many major corporations have had record earnings during this pandemic while their competition was decimated. Many private businesses are gone. Many people have killed themselves over the lockdowns. The mask itself is a violation of our rights when mandated by local, state, or federal governments. If private companies want to require it, that's up to them. Most of the protests I saw being talked about weren't specifically over masks, but over the lockdowns.

                    Comment

                    Working...
                    X