
Linux Sound Subsystem Begins Cleaning Up Its Terminology To Meet Inclusive Guidelines


  • drownthepoor
    replied
    Originally posted by computerquip View Post

    Nearly 150K dead because a group thought masks took their freedoms away, they believed that COVID-19 was a hoax, and they opened our country up without really caring about what our health experts were saying.
    I wanna just say first, I'm not your enemy and I don't hold any animosity towards you.
    Most of those deaths are from New York, New Jersey, and Michigan. All 3 of these states implemented the most extreme lockdowns and mandates regarding the illness.
    In Michigan and New York in particular, huge chunks of the total death counts occurred in nursing homes. Nursing homes that were sent Covid-positive patients, many of which weren't even elderly people. There are many criminal negligence lawsuits being filed around these deaths.
    The other important thing to realize is that death certificates were marked with Covid-19 as the cause of death, despite people dying of heart-attacks, cancer, and other serious illnesses that they had been fighting a losing battle against for years. A doctor even published orders from the State Govt to do this, and was viciously attacked over it.
    Many family members of deceased people have come forward about the cause of death being marked Covid despite clearly dying from another cause.
    Lastly, the total number of dead is actually quite normal and consistent with a serious Influenza season, and this is especially pertinent when considering the massive drops in other cause of death designations while Covid has been happening.
    Last month the CDC published a report stating that only around 15 thousand people died solely of Covid with no other major co-morbidities.
    I know my position probably seems insane to you, but I urge you to look into the things I mentioned.
    Many major corporations have had record earnings during this pandemic while their competition was decimated. Many private businesses are gone. Many people have killed themselves over the lockdowns. The mask itself is a violation of our rights when mandated by local, state, or federal governments; if private companies want to require it, that's up to them. Most of the protests I saw being talked about weren't specifically over masks, but over the lockdowns.



  • Kerisun
    replied
    But what about the auxiliary temperature sensor?



  • oiaohm
    replied
    Originally posted by Djhg2000 View Post
    I'm pretty sure accountancy isn't the oldest use of black and white to signify different meanings. It's far more likely that we, as a species, connected the day and night cycle with different levels of risk. After all many of the natural predators which could challenge our survival are more active at night. Furthermore we have terrible eyesight in low light conditions compared to other animals so in a direct battle we wouldn't stand much of a chance.
    This is a mistake that underestimates humans. Once you understand the mistake, the features of vampires and the link between vampires and bats make sense.


    It turns out that, just like bats, we humans can do echolocation. Because we live in the light a lot these days and don't train in echolocation, the skill is no longer common. The clicking-based languages in fact go back to echolocation. Humans with echolocation are highly effective in the dark, so for the groups skilled in it, black was good and white was bad: they had the advantage at night over the normal prey targets of humans. Yes, you are right that English has a bias towards light being good and dark being bad. The problem is that this is not true for every language; there are languages with a bias the other way as well. As we get more diversity in programmers from different language backgrounds, the issues from using black and white get worse due to these native-language differences.

    This is the problem: you have presumed that there is a hard, constant rule that white is good and black is bad. For humans it is not a hard, constant rule. Accountancy in English documents is the first place we see inversion of black and white, but the inverted usage of black and white goes back further in other languages. It makes sense once you are aware that human hunting methods are a lot closer to bats' than you might think: we can use both vision and echolocation. Remember, lots of bats prefer vision over echolocation when they have light as well. Echolocation could even explain how particular tombs in Egypt were carved, since humans don't in fact need their eyes to work out where they are in a space if they are skilled in echolocation.

    Originally posted by Djhg2000 View Post
    I don't know how much overlap we have between accountants and kernel developers, but I guess that, albeit small, there's still a point there. I don't think that's enough of a justification to change, though.
    There is more overlap than you would think. Most kernel developers are full-time paid staff, so they have to explain what they have been doing to justify their wages to human resources, and this eventually goes up to accountancy. Anything that causes confusion between the accountancy side and kernel developers is not exactly good, as it can result in hours of a developer's time being wasted sorting out the miscommunication in order to keep getting paid.


    Originally posted by Djhg2000 View Post
    We have to accept ambiguity. There's simply no way around it without resorting to dictator-like methods where one field sets the standard for everyone and it's up to the hands-on engineers to deal with the resulting paradoxes.
    Yes, we have to accept that there is going to be ambiguity in terms, but we should still get rid of it when we have the chance.

    Originally posted by Djhg2000 View Post
    If the documentation is ambiguous now, what tells you it's going to be immediately clear one rewrite down the line?
    The horrible reality is that most open source documentation is constantly in some state of ambiguity, because there are simply not enough documentation writers to rewrite it as often as would be needed to keep it perfectly clear. If you look across the libvirt documentation, there are other vaguely written areas that have nothing to do with blacklist/whitelist. The style the libvirt documentation writers use is deliberately ambiguous to leave space for future features, and that is unfortunately a fairly common open source documentation style. Hating this style I agree with. But thinking the change was some political-correctness thing is reading more into it than there really was. The ambiguous writing style that leaves space for future features is a totally different problem from political correctness, and it makes the learning curve of many pieces of open and closed source software harder.

    It is really easy to incorrectly join the ambiguous-writing-style problem to political-correctness changes without doing proper research into why the change was made and what writing style is in use. The libvirt change you pulled out should not be linked to political correctness, as that is not the cause; the ambiguous writing style that leaves space for future features is. If you want to hate that, I back you. To reduce that style being used, we need more documentation writers, so the documentation can always be kept current, removing the temptation to use it.



  • Djhg2000
    replied
    Originally posted by oiaohm View Post
    It's not exactly where the problem ends. Accountancy is the oldest case but not the only one. Those doing accountancy are also the ones processing lots of product return/refund and after-sales information, so it is at times important that software developers and accountancy have the same understanding.

    A driver adding particular hardware to a deny list can be something the accountancy people have requested, because they know that X piece of hardware does not work with Y driver. Using blacklist/whitelist here could lead to a screw-up.
    I'm pretty sure accountancy isn't the oldest use of black and white to signify different meanings. It's far more likely that we, as a species, connected the day and night cycle with different levels of risk. After all many of the natural predators which could challenge our survival are more active at night. Furthermore we have terrible eyesight in low light conditions compared to other animals so in a direct battle we wouldn't stand much of a chance. This is a more likely origin of the convention of using dark and bright for bad and good, evolving over time with our spoken languages to black and white for negative and positive. In Scandinavian languages you can still see the traces of this where dark and bright are more or less synonymous with bad/depressed and good/joyful in most contexts. AFAIK similar traits are also present in English.

    I don't know how much overlap we have between accountants and kernel developers, but I guess that, albeit small, there's still a point there. I don't think that's enough of a justification to change, though.

    Originally posted by oiaohm View Post
    Electrical engineering, with USA AC wiring and other things, uses black and white in different areas with meanings inverted from the computer world's black/white as well. This leads to a software developer having to talk to the electrical engineer who made the hardware while each has a different idea of what black and white mean, and yes, this has led to some historic code goofs in drivers. Using allow and deny instead prevents these problems.

    I see blacklist and whitelist as a source of operational problems. The colour black is not always bad and white is not always good; that is what causes the communication problem. Different fields define it differently. Heck, you have USA people using the words positive and negative with AC instead of the correct terms, active and neutral.

    Yes, engineering is full of ambiguous conventions, and over time they are regularly the causes of fatal disasters. If it is possible to get rid of an ambiguous convention by replacing it with a more exact term, the long-term result is a good outcome for lives.
    But how far should we take this? Even in electrical engineering there's still ambiguous or flat out counter intuitive conventions.

    Plenty of screw-ups have happened because we use negative to describe the source of electrons and positive as the sink for electrons. Should we have another go at hard-flipping those around to be intuitive tomorrow? Last time was a disaster for those who tried, and we'd have a very clear generational divide where the two generations simply cannot work together, with experience lost with the older generation. I don't see conventional flow going anywhere soon, but at least we've started laying the groundwork for a slow change over future generations by insisting we call it conventional flow.

    Or should we perhaps come up with a new convention for representing torque in mechanical engineering (double tip arrows) that doesn't require you to know the right hand rule?

    We have to accept ambiguity. There's simply no way around it without resorting to dictator-like methods where one field sets the standard for everyone and it's up to the hands on engineers to deal with the resulting paradoxes.

    Originally posted by oiaohm View Post
    In the case of cgroup v1 to cgroup v2, you can use terms that cover both cgroup v1 and cgroup v2. Yes, changing the documentation before changing the code is kind of the wrong way round, but this is a side effect of the lack of documentation writers: when a documentation writer is on hand, the documentation has to be made as future-proof as possible, because you might be waiting five years before another one touches it.

    In the case of cgroup v1 and cgroup v2, the upstream infrastructure in the Linux kernel is already set in stone, so you can make some very solid predictions even before the userspace code is updated.
    If the documentation is ambiguous now, what tells you it's going to be immediately clear one rewrite down the line?

    Originally posted by oiaohm View Post
    When the code base gets updated to cgroup v2 and no documentation writer is there to update the documentation, using the term whitelist is going to be a problem.

    For a virtual machine manager, a deny-list model can make sense instead of an allow-list model on a particular VM. Think of a hypervisor passing all unassigned devices through to a particular virtual machine: making up a list of known assigned devices to deny access to is simple, while making up an allow list covering every future device someone could plug in is another problem. There are some cgroup v2-based hypervisor designs that do in fact have both allow-list and deny-list modes.

    libvirt could implement a proper deny list in future, and cgroup v2 allows this.
    This is the global libvirt config file. Anything included on this whitelist can be made available to QEmu instances spawned from libvirtd, but libvirtd still controls what the VM can access. There's no difference as far as the VM is concerned, the whitelist is entirely internal to the libvirt infrastructure. It's a binary decision, either it's on the whitelist or it's on the implicit blacklist. There's no middle ground here on the global level.

    Now that I think about it each libvirt VM configuration already has its own whitelist and implicit blacklist as well. You can't just enable it in the global config and expect it to work, the same entries still need to be added to the QEmu commandline in the VM configuration.
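    For context, the global whitelist being described lives, as I understand it, in libvirt's qemu.conf as the cgroup_device_acl setting. The fragment below is a sketch of what the stock list looks like; treat the exact device paths as illustrative rather than authoritative, and check your distribution's shipped qemu.conf for the real defaults.

```
# /etc/libvirt/qemu.conf (excerpt, illustrative)
# Devices QEMU guests may be granted access to via the devices cgroup;
# anything not listed here is implicitly denied.
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm"
]
```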



  • oiaohm
    replied
    Originally posted by Djhg2000 View Post
    But flipping the tables I think it's reasonable to expect software developers to figure out the gist of what blacklist/whitelist means, just like it's reasonable to expect accountants to figure out what black and inverted/white/red ink means.
    It's not exactly where the problem ends. Accountancy is the oldest case but not the only one. Those doing accountancy are also the ones processing lots of product return/refund and after-sales information, so it is at times important that software developers and accountancy have the same understanding.

    A driver adding particular hardware to a deny list can be something the accountancy people have requested, because they know that X piece of hardware does not work with Y driver. Using blacklist/whitelist here could lead to a screw-up.

    Originally posted by Djhg2000 View Post
    So basically accountancy has its own separate mess. Why should it have any bearing on what blacklist and whitelist mean, then? From my perspective it seems more logical for accountants to figure out their conventions on their own and stay away from interfering with conventions from the rest of the world.

    Engineering is full of ambiguous conventions when you put them side by side, but they all at least somewhat make sense in their own context (except perhaps X11 server/client).
    Electrical engineering, with USA AC wiring and other things, uses black and white in different areas with meanings inverted from the computer world's black/white as well. This leads to a software developer having to talk to the electrical engineer who made the hardware while each has a different idea of what black and white mean, and yes, this has led to some historic code goofs in drivers. Using allow and deny instead prevents these problems.

    I see blacklist and whitelist as a source of operational problems. The colour black is not always bad and white is not always good; that is what causes the communication problem. Different fields define it differently. Heck, you have USA people using the words positive and negative with AC instead of the correct terms, active and neutral.

    Yes, engineering is full of ambiguous conventions, and over time they are regularly the causes of fatal disasters. If it is possible to get rid of an ambiguous convention by replacing it with a more exact term, the long-term result is a good outcome for lives.

    Originally posted by Djhg2000 View Post
    Changing the documentation before the code is doing it the wrong way around. Unless you have the new code figured out there's no way to know you're not misleading the users.
    In the case of cgroup v1 to cgroup v2, you can use terms that cover both cgroup v1 and cgroup v2. Yes, changing the documentation before changing the code is kind of the wrong way round, but this is a side effect of the lack of documentation writers: when a documentation writer is on hand, the documentation has to be made as future-proof as possible, because you might be waiting five years before another one touches it.

    In the case of cgroup v1 and cgroup v2, the upstream infrastructure in the Linux kernel is already set in stone, so you can make some very solid predictions even before the userspace code is updated.


    Originally posted by Djhg2000 View Post
    Besides, it wouldn't make sense to introduce blacklists in this context unless you want them to be global overrides, in which case the blacklist wouldn't be in the device controller itself (since its function is to be an intermediary between libvirt and cgroups), but rather an internal hierarchy in libvirt, and the comment would be even more misleading. It's being called a whitelist because it is a whitelist.
    When the code base gets updated to cgroup v2 and no documentation writer is there to update the documentation, using the term whitelist is going to be a problem.

    For a virtual machine manager, a deny-list model can make sense instead of an allow-list model on a particular VM. Think of a hypervisor passing all unassigned devices through to a particular virtual machine: making up a list of known assigned devices to deny access to is simple, while making up an allow list covering every future device someone could plug in is another problem. There are some cgroup v2-based hypervisor designs that do in fact have both allow-list and deny-list modes.

    libvirt could implement a proper deny list in future, and cgroup v2 allows this.



  • Djhg2000
    replied
    Originally posted by oiaohm View Post
    The oldest documented historic example is the accounting one. Accountancy uses black and white in very mixed ways. Under different electrical standards, black and white can each be good or bad. In some religions, black is good. With allow and deny there are no conflicting secondary interpretations.
    But flipping the tables I think it's reasonable to expect software developers to figure out the gist of what blacklist/whitelist means, just like it's reasonable to expect accountants to figure out what black and inverted/white/red ink means.

    Originally posted by oiaohm View Post
    It's more a question of how to write negative values cost-effectively. Yes, it does depend on how you look at it. Putting in a - sign would work, but it makes the number wider; inverted and strikethrough take the same space as the positive version of the number. It becomes a question of the cost of paper vs the cost of ink.

    Think of the following, with bold standing for inverted/strikethrough:
    1000 1000
    -1000 1000
    See how the - sign makes the column one character wider, so more paper. It could be a lot more paper: think of the big A1 and larger accountancy spreadsheets of old, where making columns wide enough for a - sign means quite a few fewer columns across the complete page, so more pages to record the same information.

    Yes, accountants get the art of penny-pinching down to an art form, and sometimes the answer is not quite what you expect. Remember that around 1400 paper was expensive, to the point that covering a complete page in ink was cheaper than making the page wider to allow for - signs, so it makes historic sense. Strikethrough and inverted fonts in early computer accounting programs made sense for the same kind of reason: in 40×25 text mode, the - sign is a hell of a luxury in screen real estate when you are trying to display as much information as possible on screen. Basically the same problem as 1400s accountancy, so since the old method worked back then, it was used again.

    Yes, a blacklist in 1400s accountancy could be a page listing everyone who owes you money, or by the convention of the time it could equally be called a whitelist, because the text is all white. "Being in the black" in accountancy, meaning being in the positive, is the usage based on text colour; it would be "being in the white" for positive under the accountancy standard that goes by paper colour. From the 1400s right up to the current day, accountancy has had conflicting standards over whether you go by text colour or text background colour when you word things.

    So be thankful the computer black/white/grey lists are based on the Hollywood trope; if they were based on the 1400s accountancy definitions of blacklist, whitelist, and greylist, they would be truly as clear as mud, with no right answer. The accountancy mess with black, white, and grey lists is why at times you don't want to go back to the very first usage of a term, but instead follow the origin tree.
    So basically accountancy has its own separate mess. Why should it have any bearing on what blacklist and whitelist mean, then? From my perspective it seems more logical for accountants to figure out their conventions on their own and stay away from interfering with conventions from the rest of the world.

    Engineering is full of ambiguous conventions when you put them side by side, but they all at least somewhat make sense in their own context (except perhaps X11 server/client).

    Originally posted by oiaohm View Post
    Note the -t cgroup: if the code base had been updated to full cgroup v2, that would be -t cgroup2.

    The process of upgrading a code base from cgroup v1 to cgroup v2 can involve a collection of different upgrades, and some of the documentation upgrades at times happen before the core code.

    Really, you have not done your homework on what upstream cgroups is doing, or you would understand how libvirt will be changing in future and see that the documentation change is just a precursor to the other changes that will come to the code base as well.
    That's not my full quote so I'd prefer if you use the "[...]" convention (or similar) when splicing parts together.

    Changing the documentation before the code is doing it the wrong way around. Unless you have the new code figured out there's no way to know you're not misleading the users.

    Besides, it wouldn't make sense to introduce blacklists in this context unless you want them to be global overrides, in which case the blacklist wouldn't be in the device controller itself (since its function is to be an intermediary between libvirt and cgroups), but rather an internal hierarchy in libvirt, and the comment would be even more misleading. It's being called a whitelist because it is a whitelist.



  • oiaohm
    replied
    Originally posted by Djhg2000 View Post
    So your whole objection on the matter of using blacklist/whitelist is based on the conflict of conventions with accounting, correct?
    The oldest documented historic example is the accounting one. Accountancy uses black and white in very mixed ways. Under different electrical standards, black and white can each be good or bad. In some religions, black is good. With allow and deny there are no conflicting secondary interpretations.

    Originally posted by Djhg2000 View Post
    To me that makes little sense. Why would accountants use inverted fonts aside from exceptions when printing documents? The extra ink was, and still is, an extra expense no matter how you look at it.
    It's more a question of how to write negative values cost-effectively. Yes, it does depend on how you look at it. Putting in a - sign would work, but it makes the number wider; inverted and strikethrough take the same space as the positive version of the number. It becomes a question of the cost of paper vs the cost of ink.

    Think of the following, with bold standing for inverted/strikethrough:
    1000 1000
    -1000 1000
    See how the - sign makes the column one character wider, so more paper. It could be a lot more paper: think of the big A1 and larger accountancy spreadsheets of old, where making columns wide enough for a - sign means quite a few fewer columns across the complete page, so more pages to record the same information.

    Yes, accountants get the art of penny-pinching down to an art form, and sometimes the answer is not quite what you expect. Remember that around 1400 paper was expensive, to the point that covering a complete page in ink was cheaper than making the page wider to allow for - signs, so it makes historic sense. Strikethrough and inverted fonts in early computer accounting programs made sense for the same kind of reason: in 40×25 text mode, the - sign is a hell of a luxury in screen real estate when you are trying to display as much information as possible on screen. Basically the same problem as 1400s accountancy, so since the old method worked back then, it was used again.

    Yes, a blacklist in 1400s accountancy could be a page listing everyone who owes you money, or by the convention of the time it could equally be called a whitelist, because the text is all white. "Being in the black" in accountancy, meaning being in the positive, is the usage based on text colour; it would be "being in the white" for positive under the accountancy standard that goes by paper colour. From the 1400s right up to the current day, accountancy has had conflicting standards over whether you go by text colour or text background colour when you word things.

    So be thankful the computer black/white/grey lists are based on the Hollywood trope; if they were based on the 1400s accountancy definitions of blacklist, whitelist, and greylist, they would be truly as clear as mud, with no right answer. The accountancy mess with black, white, and grey lists is why at times you don't want to go back to the very first usage of a term, but instead follow the origin tree.

    Originally posted by Djhg2000 View Post
    But it seems you didn't do your homework here. The syntax for the cgroup ACL the comment refers to is still in the form of a whitelist:
    Code:
    # What cgroup controllers to make use of with QEMU guests
    # mkdir /dev/cgroup
    # mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
    Note the -t cgroup: if the code base had been updated to full cgroup v2, that would be -t cgroup2.

    The process of upgrading a code base from cgroup v1 to cgroup v2 can involve a collection of different upgrades, and some of the documentation upgrades at times happen before the core code.

    Really, you have not done your homework on what upstream cgroups is doing, or you would understand how libvirt will be changing in future and see that the documentation change is just a precursor to the other changes that will come to the code base as well.



  • Djhg2000
    replied
    Originally posted by oiaohm View Post
    To be correct: when writing a mandatory directive into a standard, historic documents like the X11 standard mandated capitalisation of the first letter. Yes, it's a convention we forget today, but that's why, when I was talking about X11, I typed it that way.
    This only applies to the specific instances though. I'm talking about master/slave as generic terms, not in reference to any specific subset of software.

    Originally posted by oiaohm View Post
    Exactly: the history of the computer origin of blacklist, whitelist, and greylist that is provable by code comments has no racist link at all.
    So your whole objection on the matter of using blacklist/whitelist is based on the conflict of conventions with accounting, correct?

    Originally posted by oiaohm View Post
    Yes, true, and this leads to people debating this stuff without the base knowledge.
    I'd argue this is a much bigger issue than the terminology nitpicking we're dealing with in this thread, especially as it affects everything we use search engines for. It's mind boggling how much control we've surrendered to algorithms, and the inherent monoculture which all click scoring systems inevitably result in. But I'll leave it at that and get back to the topic at hand before I go on yet another rant about this.

    Originally posted by oiaohm View Post
    Oil- and water-based inks will not go through wax, so it's not exactly flowing around the wax; rather, the wax has water/oil-proofed the paper at that point. The excess wax comes off the document once you handle it, even though a percentage of the hot wax straight from the candle soaked into the paper. So wax soaks into paper like ink; it is just a whitish ink that, before the excess comes off, is normally a slightly different colour to the paper, so it is in fact visible.
    That's a pretty neat way of inverting handwriting on a paper.

    Originally posted by oiaohm View Post
    The inverted font made for the Gutenberg press disputes that point, as it was first made for mass-printing the accounting documents of the time. The old documents describe a few different ways to do it.

    The blotter is the expensive version. The cheaper version is to write in wax, then basically scribble over it with your normal quill. This also leads to numbers written strikethrough being either negative or incorrect, which is why you need to initial anything you cross out as wrong in accountancy. Early computer accountancy programs also used either an inverted font or strikethrough for negatives; that is just a continuation from the 1400s to fairly modern days (within 30 years of today).
    To me that makes little sense. Why would accountants use inverted fonts aside from exceptions when printing documents? The extra ink was, and still is, an extra expense no matter how you look at it.

    Using strikethrough is something I've heard of before but as I recall it's because even errors need to be present for the purpose of performing audits. That said I've never done any serious accounting nor delved too deeply into what the formal conventions are.

    Originally posted by oiaohm View Post
    This is one you should have done more homework on, because it's a great example of personal bias when you have not done your homework. It is a perfect example of whitelist now being in fact wrong, though it was kind of right a long time back. The correction is in fact right, and the documentation in the Linux kernel shows it, if you know what to read.


    Let's start with when it was kind of right to use "use for device whitelisting".


    Code:
    Device Whitelist Controller

    1. Description

    Implement a cgroup to track and enforce open and mknod restrictions on device files. A device cgroup associates a device access whitelist with each cgroup. A whitelist entry has 4 fields. 'type' is a (all), c (char), or b (block). 'all' means it applies to all types and all major and minor numbers. Major and minor are either an integer or * for all. Access is a composition of r (read), w (write), and m (mknod).
    The root device cgroup starts with rwm to 'all'. A child device cgroup gets a copy of the parent. Administrators can then remove devices from the whitelist or add new entries. A child cgroup can never receive a device access which is denied by its parent.

    2. User Interface

    An entry is added using devices.allow, and removed using devices.deny. For instance:

    echo 'c 1:3 mr' > /sys/fs/cgroup/1/devices.allow

    allows cgroup 1 to read and mknod the device usually known as /dev/null. Doing:

    echo a > /sys/fs/cgroup/1/devices.deny

    will remove the default 'a *:* rwm' entry. Doing:

    echo a > /sys/fs/cgroup/1/devices.allow

    will add the 'a *:* rwm' entry to the whitelist.
    The basic description at the start looks fine do note this is cgroupv1. Do notice you have allow and deny lists inside this so there is already signs that this may not stay a pure whitelist implementation there is one particular problem sentence that is going to trigger the change.
    Code:
    A child cgroup can never receive a device access which is denied by its parent.
    Do you really want to have to list every single device in the system to prevent a container from access only 1 particular device. So this is going to cause a redesign at some point..

    Now lets look at the cgroup v2 where we now find something different that means "use for device access control" is correct and "use for device whitelisting" is now wrong and the weakness has now been corrected.

    Code:
    [B]Device controller[/B]
    
    Device controller manages access to device files. It includes both creation of new device files (using mknod), and access to the existing device files.
    Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF. To control access to device files, a user may create bpf programs of the BPF_CGROUP_DEVICE type and attach them to cgroups. On an attempt to access a device file, corresponding BPF programs will be executed, and depending on the return value the attempt will succeed or fail with -EPERM.
    A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx structure, which describes the device access attempt: access type (mknod/read/write) and device (type, major and minor numbers). If the program returns 0, the attempt fails with -EPERM, otherwise it succeeds.
    An example of BPF_CGROUP_DEVICE program may be found in the kernel source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
    Cgroup v2 in fact allows directly to have functional deny list(blacklist) instead of allow list(whitelist) and in fact allows you to have a filter list(greylist) that is allows asking user if access to X device is allowed by the BPF program interfacing with userspace program.

    So yes if you only support cgroup v1 using the "'devices' - use for device whitelisting" is kind of right. But once you start supporting cgroup v2 it is "device access control" since now you can have fully functional allow/deny/filter lists that you create by converting what ever information there into a BPF program.

    So you have just found example of where whitelist as a term was right in past and due to Linux kernel changes expanding features is now completely wrong. Device filtering in cgroups v2 is no longer limited to whitelist methods. The BPF method also means something can change from allowed to deny to filter on the fly as well.

    So not all the cases of removal of blacklist or whitelist from Linux kernel and supporting programs has anything todo with political correctness sometimes it simply that the feature has expand that those terms don't work at all any more and that is exactly what you just pointed to. Libvirt with qemu talking about cgroup controller around qemu the term whitelist does not fit any more because they are upgrading to support cgroupv2 where things are different.
    But it seems you didn't do your homework here. The syntax for the cgroup ACL the comment refers to is still in the form of a whitelist:
    Code:
    # What cgroup controllers to make use of with QEMU guests
    #
    # - 'cpu' - use for scheduler tunables
    # - 'devices' - use for device whitelisting
    # - 'memory' - use for memory tunables
    # - 'blkio' - use for block devices I/O tunables
    # - 'cpuset' - use for CPUs and memory nodes
    # - 'cpuacct' - use for CPUs statistics.
    #
    # NB, even if configured here, they won't be used unless
    # the administrator has mounted cgroups, e.g.:
    #
    # mkdir /dev/cgroup
    # mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
    #
    # They can be mounted anywhere, and different controllers
    # can be mounted in different locations. libvirt will detect
    # where they are located.
    #
    #cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]
    
    # This is the basic set of devices allowed / required by
    # all virtual machines.
    #
    # As well as this, any configured block backed disks,
    # all sound device, and all PTY devices are allowed.
    #
    # This will only need setting if newer QEMU suddenly
    # wants some device we don't already know about.
    #
    #cgroup_device_acl = [
    # "/dev/null", "/dev/full", "/dev/zero",
    # "/dev/random", "/dev/urandom",
    # "/dev/ptmx", "/dev/kvm"
    #]
    #
    # RDMA migration requires the following extra files to be added to the list:
    # "/dev/infiniband/rdma_cm",
    # "/dev/infiniband/issm0",
    # "/dev/infiniband/issm1",
    # "/dev/infiniband/umad0",
    # "/dev/infiniband/umad1",
    # "/dev/infiniband/uverbs0"
    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm",
        "/dev/rtc","/dev/hpet",
        "/dev/input/by-id/usb-Corsair_Corsair_K70R_Gaming_Keyboard-if02-event-kbd",
        "/dev/input/by-id/usb-Logitech_Gaming_Mouse_G600_3AB43C8E80600017-event-mouse",
        "/dev/input/by-id/usb-Logitech_Gaming_Mouse_G600_3AB43C8E80600017-if01-event-kbd"
    ]
    As you see here, the devices specified in the cgroup_device_acl structure are devices which a VM is allowed to access. You may also notice this is a feature which I'm actively using as well. In this particular instance it allows udev to gracefully hand off my mouse and keyboard to a libvirtd-owned instance of QEmu, toggled by a hotkey combination instead of me having to forward the raw USB devices with a spare keyboard/mouse. Very useful for VMs where I use programs requiring low latency raw inputs (SPICE is neat and all but too buggy and jittery for these applications).

    Its use as a whitelist is clarified by another comment right before the structure as well, and the devices controller in the cgroup_controllers structure has no other purpose than to provide the functionality required by the aforementioned whitelist. There is no blacklist because denial is the default action for devices not specified in the whitelist.
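    To make that default-deny behaviour concrete, here is a toy model in Python (purely illustrative: the helper name is invented, and the real enforcement is done by the kernel's devices cgroup, not by libvirt checking paths in userspace):

    ```python
    # Toy model of libvirt's cgroup_device_acl semantics: device paths on
    # the list are allowed, everything else is denied by default, so no
    # explicit blacklist entry is ever needed.
    CGROUP_DEVICE_ACL = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm",
    ]

    def vm_may_access(path):
        """Hypothetical helper: True only for whitelisted device paths."""
        return path in CGROUP_DEVICE_ACL  # absence from the list means deny

    print(vm_may_access("/dev/kvm"))  # True
    print(vm_may_access("/dev/sda"))  # False
    ```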

    The fact that the underlying cgroup implementation changed over the years is, while true, irrelevant to the libvirtd config file syntax.



  • oiaohm
    replied
    Originally posted by Djhg2000 View Post
    (As an aside I don't think master/slave should be capitalized, they're regular words just like any other.)
    To be correct: when writing a mandatory directive into a standard, historic documents like the X11 standard mandated capitalisation of the first letter. Yes, it's a convention we forget today, but that's why, when I was talking about X11, I typed it that way.

    Originally posted by Djhg2000 View Post
    If it referred to hats then it clearly didn't refer to skin color and as such is categorically not racist, right?
    Exactly. The history of the computer-world origin of blacklist/whitelist and greylist, which is provable by code comments, has no racist link at all.

    Originally posted by Djhg2000 View Post
    At some point you just can't be bothered to click "next page", even though you should in order to find the neutral results with low click scoring.
    Yes, true, and this leads to debating this stuff without the base knowledge.

    Originally posted by Djhg2000 View Post
    That's actually some neat trivia, I appreciate your dedication to doing things right as a prop person. So the idea was that the ink would flow around the protruding text? I must admit I'm more interested in the actual physical blacklist now than the blacklist/whitelist debate we were having.
    Oil- and water-based inks will not go through wax, so it's not exactly flowing around the wax; it's that the wax has water/oil-proofed the paper at that point. The excess wax will come off the document once you handle it, but a percentage of the hot wax straight from a candle soaks into the paper. So wax soaks into paper like ink; it's essentially a whitish ink that, before the excess comes off, is normally a slightly different colour to the paper and so is in fact visible.

    Originally posted by Djhg2000 View Post
    Technically correct but no accountant in their right mind would use wax and lots of ink.
    The inverted font made for the Gutenberg press disputes that point, as it was first made for mass printing the accounting documents of the time. There are a few different ways the old documents say to do it.

    The blotter is the expensive version. The cheaper version is to write in wax and then basically scribble over it with your normal quill. This also leads to struck-through numbers meaning either negative or incorrect, which is why in accountancy you need to initial anything you cross out as wrong. Early computer accountancy programs also used either inverted fonts or strikethrough for negatives; that is just a continuation from the 1400s to the fairly modern day (yes, within 30 years of the current day).

    Originally posted by Djhg2000 View Post
    As luck would have it, I came across an example of when this change gets destructive just a few hours ago. Here is the diff for the latest libvirt-daemon-system config update in Debian:
    Code:
    --- /etc/libvirt/qemu.conf 2020-07-13 19:38:22.981996824 +0200
    +++ /etc/libvirt/qemu.conf.dpkg-new 2020-07-27 22:50:08.000000000 +0200
    @@ -464,7 +464,7 @@
    # What cgroup controllers to make use of with QEMU guests
    #
    # - 'cpu' - use for scheduler tunables
    -# - 'devices' - use for device whitelisting
    +# - 'devices' - use for device access control
    # - 'memory' - use for memory tunables
    # - 'blkio' - use for block devices I/O tunables
    # - 'cpuset' - use for CPUs and memory nodes
    Note how it's no longer clear if the devices here are allowed or denied access. It's barely even clear if these devices control the access or the access of the devices are controlled. All of which was summarized with a single word before this change.
    This is one where you should have done more homework, because it's a great example of personal bias. It is a perfect example of whitelist being in fact wrong now although it was kind of right a long time back. The correction there is in fact right per the documentation in the Linux kernel, if you know what to read.


    Let's start with when it was kind of right to use "use for device whitelisting".


    Code:
    Device Whitelist Controller
    
    
    1. Description
    
    Implement a cgroup to track and enforce open and mknod restrictions on device files. A device cgroup associates a device access whitelist with each cgroup. A whitelist entry has 4 fields. ‘type’ is a (all), c (char), or b (block). ‘all’ means it applies to all types and all major and minor numbers. Major and minor are either an integer or * for all. Access is a composition of r (read), w (write), and m (mknod).
    The root device cgroup starts with rwm to ‘all’. A child device cgroup gets a copy of the parent. Administrators can then remove devices from the whitelist or add new entries. A child cgroup can never receive a device access which is denied by its parent.
    
    
    2. User Interface
    
    An entry is added using devices.allow, and removed using devices.deny. For instance:
    
    echo 'c 1:3 mr' > /sys/fs/cgroup/1/devices.allow
    
    allows cgroup 1 to read and mknod the device usually known as /dev/null. Doing:
    
    echo a > /sys/fs/cgroup/1/devices.deny
    
    will remove the default ‘a *:* rwm’ entry. Doing:
    
    echo a > /sys/fs/cgroup/1/devices.allow
    
    will add the ‘a *:* rwm’ entry to the whitelist.
    The basic description at the start looks fine, but do note this is cgroup v1. Also notice that you have both allow and deny lists inside this, so there are already signs that this may not stay a pure whitelist implementation. There is one particular problem sentence that is going to trigger the change.
    Code:
    A child cgroup can never receive a device access which is denied by its parent.
    Do you really want to have to list every single device in the system just to prevent a container from accessing one particular device? So this is going to cause a redesign at some point.
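    The hierarchy rule quoted above can be sketched as a toy model (Python, purely illustrative; entries are (type, major, minor, access) tuples and the function name is invented — the real kernel check works on per-field masks, not whole-tuple sets):

    ```python
    # Toy model of the cgroup v1 rule "a child cgroup can never receive a
    # device access which is denied by its parent": whatever a child asks
    # for is effectively intersected with its parent's whitelist.
    def effective_whitelist(parent, requested):
        # Plain set intersection is enough to show the containment rule.
        return parent & requested

    root = {("c", 1, 3, "rwm"), ("c", 1, 5, "rwm")}  # /dev/null, /dev/zero
    child = {("c", 1, 3, "rwm"), ("b", 8, 0, "r")}   # also asks for a disk
    print(effective_whitelist(root, child))          # only /dev/null survives
    ```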

    Now let's look at cgroup v2, where we find something different which means "use for device access control" is correct, "use for device whitelisting" is now wrong, and the weakness has been corrected.

    Code:
    Device controller
    
    Device controller manages access to device files. It includes both creation of new device files (using mknod), and access to the existing device files.
    Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF. To control access to device files, a user may create bpf programs of the BPF_CGROUP_DEVICE type and attach them to cgroups. On an attempt to access a device file, corresponding BPF programs will be executed, and depending on the return value the attempt will succeed or fail with -EPERM.
    A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx structure, which describes the device access attempt: access type (mknod/read/write) and device (type, major and minor numbers). If the program returns 0, the attempt fails with -EPERM, otherwise it succeeds.
    An example of BPF_CGROUP_DEVICE program may be found in the kernel source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
    Cgroup v2 in fact directly allows a functional deny list (blacklist) instead of an allow list (whitelist), and it even allows a filter list (greylist): the BPF program, by interfacing with a userspace program, can ask the user whether access to device X should be allowed.
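    As a sketch of that three-way outcome, here is a toy policy in Python (the kernel's real mechanism is a BPF_CGROUP_DEVICE program whose return value of 0 denies and non-zero allows; the FILTER verdict, the enum, and the function name here are invented purely to illustrate the greylist idea):

    ```python
    # Toy model of a cgroup v2 device decision: instead of one static
    # whitelist, a policy function looks at the access attempt and returns
    # ALLOW, DENY, or FILTER (defer to a userspace prompt).
    from enum import Enum

    class Verdict(Enum):
        ALLOW = "allow"
        DENY = "deny"
        FILTER = "filter"  # ask a userspace agent whether to permit this

    def policy(dev_type, major, minor, access):
        if (dev_type, major, minor) == ("c", 1, 3):  # /dev/null: always fine
            return Verdict.ALLOW
        if dev_type == "b":                          # block devices: ask the user
            return Verdict.FILTER
        return Verdict.DENY                          # everything else: deny

    print(policy("c", 1, 3, "r"))   # Verdict.ALLOW
    print(policy("b", 8, 0, "w"))   # Verdict.FILTER
    ```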

    So yes, if you only support cgroup v1, using "'devices' - use for device whitelisting" is kind of right. But once you start supporting cgroup v2 it is "device access control", since now you can have fully functional allow/deny/filter lists that you create by converting whatever information is there into a BPF program.

    So you have just found an example of where whitelist as a term was right in the past and, due to Linux kernel changes expanding features, is now completely wrong. Device filtering in cgroups v2 is no longer limited to whitelist methods. The BPF method also means something can change from allow to deny to filter on the fly as well.

    So not all cases of removing blacklist or whitelist from the Linux kernel and supporting programs have anything to do with political correctness; sometimes it is simply that the feature has expanded such that those terms don't work at all any more, and that is exactly what you just pointed to. For libvirt with QEMU, talking about the cgroup controller around QEMU, the term whitelist does not fit any more because they are upgrading to support cgroup v2, where things are different.



  • Djhg2000
    replied
    Originally posted by oiaohm View Post
    Overusage also means we have usages of Master/Slave that, when you look closely, are either wrong or incorrectly descriptive. It's a bit like X11 Client and Server: when you look at the fine definitions of client and server, X11 uses them inverted in many places.
    I'm trying my best not to sound condescending here, but since when has X11 been a good example of... well, anything? X11 is probably the most patched-to-death software we're still running outside of a certain Redmond product. The inversion of server/client has been a bane of X11 for decades, but the terminology lives on because everyone has gotten used to it.

    Either way, master/slave is a pretty lazy way of doing things to begin with, but it has stuck around anyway because it's super simple to understand, and no matter which of the terms you come across first in the code it's immediately obvious the other one exists somewhere.

    (As an aside I don't think master/slave should be capitalized, they're regular words just like any other.)

    Originally posted by oiaohm View Post
    Except that is not the origin of the computer-world term blacklist. Three terms ending in "list" entered the computer world at the same time: blacklist, greylist and whitelist. All three entered computer-world language after the blackhat, whitehat and greyhat hacker terms, which come straight from the Hollywood wild-west trope; even better, the first findable usage in source code directly mentions the trope. Before the 1960s we don't have computer networking, we don't have firewalls, and computers are not using anything that needs allow/deny lists.
    Well, I guess you got me on that one. But what does that prove towards the topic at hand anyway? If it referred to hats then it clearly didn't refer to skin color and as such is categorically not racist, right? I'd even go as far as advocating for this as the explanation of how blacklists and whitelists work in computing; it has an interesting origin story and everything.

    Originally posted by oiaohm View Post
    This is cognitive bias: you are only looking for cases where black is negative and white is positive. There are inverted examples when you look for them.
    It's actually a mix of cognitive bias and search engine bias. Due to the popularity of this very subject, the high-relevance search results are all about how white must be some white privilege garbage and black must be oppressing people of color. I need to sift through all of that political crap just to get to anything regarding the symbolism of monochromatic indicators.

    At some point you just can't be bothered to click "next page", even though you should in order to find the neutral results with low click scoring.

    Originally posted by oiaohm View Post
    By the way, it was redoing that 1639 play The Unnatural Combat that caused me, as a prop person, to have to work out how the blacklist in that play should be done to be right. It cost me 18 months of research. The correct answer for that play is the names written on the page in wax, then the page covered in ink. Why is this so hard to work out? Because you have to decide whether you are describing the colour of the text or the colour of the page. So that blacklist could be called a whitelist by other authors and performers, because the text is white. When you think about it, a guy willing to waste a stack of valuable ink on a list of those he wants revenge on adds to how determined Charles II is, and it opens up a creative stage-play move of having the paper written in wax and the actor spreading ink over the page to make the names appear.
    That's actually some neat trivia, I appreciate your dedication to doing things right as a prop person. So the idea was that the ink would flow around the protruding text? I must admit I'm more interested in the actual physical blacklist now than the blacklist/whitelist debate we were having.

    Originally posted by oiaohm View Post
    Also, that is not the first usage of the term blacklist; it's not even close. You are looking for guides to accountancy printed by Johannes Gutenberg some time around 1440, and these are reprints of even older hand-written documents before that, and the different authors had different opinions way back then on what was a whitelist and what was a blacklist. These are not vengeful usages either; they are about how to clearly write a money loss. The Unnatural Combat is the first documented usage of blacklist in a vengeful way, and yes, this is over 200 years later than the first usage at least. The printed accountancy guides are suspected to be based on documents from a few hundred years before their printing.
    Right, so basically every field had their own interpretation of what should be a blacklist and what should be a whitelist. Why should computing not just keep going as usual then? If it's clearly not an anomaly to have to define what a blacklist and a whitelist is, why should we be any different?

    Originally posted by oiaohm View Post

    Please note black is an ambiguous color as well, like it or not. Note how in old UK and historical Australian wiring black was neutral and red is the one that kills you. Some old Australian AC wiring is black and white; guess what, inverted compared to current-day USA.

    There is a reason why neutral is black in old UK and Australian wiring. With single-phase power on a single supply-wire run, the neutral goes to an earth spike; being black, it is the same colour as your negative DC wire, which is the same wire you also connect to the ground/earth spike on radios.

    Yes, the USA and Australia can be using the exact same rolls of three-phase cable; just the ends are wired completely differently, and we Australians are not wiring with some stupid bias that white has to be good. A nice case of arbitrary assignments causing lethality problems: in the USA system you have an earth spike with a stack of DC items connected with black wires, and if someone connects a USA AC black wire it turns all those devices lethal and does not look out of place, because everything is black.

    Yes, current Europe & UK have blue as neutral, and of course they are smart in that line 1 is not black, so you cannot have a single-phase single-wire setup where the neutral going to earth gets mixed on an earth spike incorrectly. Yes, the USA wiring standard's placement of white and black in AC is stupid and dangerous. And yes, black being neutral in the Australian one could be questioned; maybe it should be phase 2/3 as well, to make it clearer which earth spikes have AC connected.
    I was hoping you wouldn't put too much weight onto the AC wiring standards because they truly are horrible. "Standards are great, everybody should have one!"

    Where I live the use of black wiring is as ambiguous as can be. The current standard calls for black to be L2, the old one said L1 and before that black was used pretty much randomly; sometimes all three phases were black with labels, sometimes it was the neutral and sometimes it was some home cooked convention (just because it was illegal doesn't mean it didn't happen).

    Originally posted by oiaohm View Post
    It's easy to write white on white paper by writing using wax and then inking around the text. Of course candle wax was a really simple item to get, and the pad with ink in it is of course your blotting pad for inking your seals; this allowed one type of ink for everything, and you had to buy candles to work by anyhow. Sorry, your "no practical way" is not true at all; you have limited your method. Yes, an invisible-ink style method, where you write what is invisible and then make it appear, is how it is documented as done.
    Technically correct but no accountant in their right mind would use wax and lots of ink.

    Originally posted by oiaohm View Post
    For Master/Slave there is room for debate.
    I think you're going to have a hard time finding terms more descriptive. Even the best I could come up with, controller/node, doesn't make it immediately obvious that a node needs a controller to function. The controller kind of implies it needs something to control but it needs additional context for it to be analogous to master. The controller could just as well be its function in a macroscopic context (like an "air bag controller" in a car). It's also a more complicated word phonetically, which would inevitably hamper its adoption.

    Originally posted by oiaohm View Post
    Blacklist/Whitelist/Greylist in computer usage can simply be replaced with Denylist/Allowlist/Filterlist in all cases I have found. Colors, like it or not, are arbitrary assignments; depending on a person's background they may have different meanings to what is expected. Yes, it is like AC electrical wiring: when you have devices from all different countries, you cannot presume that black is neutral or active, just like it does not pay to presume black is bad or good. Yes, with AC you call stuff active and neutral, names that have meaning, rather than by the color, because using the color is a path to hell.
    Continuing with the phonetics reasoning, I think you'd have better luck with something like offlist, onlist and switchlist. I don't think trying to change them out this quickly is going to work anyway when blacklist and whitelist are fairly deeply engraved in the industry, but if enough peripheral code starts out with it then it could become viable as a change in the kernel a few decades down the line for new code.

    As luck would have it, I came across an example of when this change gets destructive just a few hours ago. Here is the diff for the latest libvirt-daemon-system config update in Debian:
    Code:
    --- /etc/libvirt/qemu.conf 2020-07-13 19:38:22.981996824 +0200
    +++ /etc/libvirt/qemu.conf.dpkg-new 2020-07-27 22:50:08.000000000 +0200
    @@ -464,7 +464,7 @@
    # What cgroup controllers to make use of with QEMU guests
    #
    # - 'cpu' - use for scheduler tunables
    -# - 'devices' - use for device whitelisting
    +# - 'devices' - use for device access control
    # - 'memory' - use for memory tunables
    # - 'blkio' - use for block devices I/O tunables
    # - 'cpuset' - use for CPUs and memory nodes
    Note how it's no longer clear whether the devices here are allowed or denied access. It's barely even clear whether these devices control the access or the access to the devices is controlled. All of that was summarized with a single word before this change.

