FreeBSD Is Pursuing A Compatibility Layer To Make It Easier To Run Linux DRM Drivers


  • #31
    Originally posted by aht0 View Post
    How can you steal something that is already under permissive license in the first place? Explain me the fcuking logic behind such reasoning.
    No, no, I'm not saying that. I'm talking about the stage before that, before the code has any license and hasn't been published yet.

    I'm looking at the matter from the dev's point of view: since you say it frustrates devs, I'll explain why (many) other devs disagree.

    Let's say I want to make something open source for <random reason>, but I don't want my stuff stolen just for the lulz, because I think that's bad for my project, the world at large, or whatever.

    Can I use a permissive license for my work? No.

    Will the GPL do what is reasonably possible to protect my intent? Yes.

    All this GPL-vs-permissive shitstorm is nonsense. The GPL is for people who want to open-source their work without letting others steal it, end of story.

    If you think that is useful or necessary (imho it is), then the GPL makes sense; if you don't need it, it doesn't. There is no "winner", the two licenses cater to different user bases.

    Comment


    • #32
      That reasoning I can agree with, since it's your creation.

      Comment


      • #33
        Originally posted by aht0 View Post
        How can you steal something that is already under permissive license in the first place? Explain me the fcuking logic behind such reasoning.
        But would it be okay if someone grabbed the whole basket, denying everyone else around that ability, and then sold your bread? Without paying you anything.

        Whether or not you have something against that kind of attitude, that's how the BSD ecosystem operates, and it largely explains the underdeveloped state of the BSD systems. They started EARLIER than Linux, had something like 10 extra years to write code, etc., but many companies never bothered to contribute back. And only a few people are okay with putting out a basket full of bread just to watch some greedy spigot grab it, show the middle finger to everyone else, and sell the bread pretending they baked it. Somehow this approach does not work well for large projects where no single person or company can claim to have created the entire thing from scratch. From a dev's perspective it is pretty much a rip-off, and most devs are smart enough to get the idea.
        Last edited by SystemCrasher; 03 June 2016, 10:11 PM.

        Comment


        • #34
          Originally posted by SystemCrasher View Post
          But would it be okay when someome would grab all the basket, denying this ability to humans around, and then selling yor bread? Without paying anything to you.

          And only few humans are ok with putting basket full of bread just to see some greedy spiggot grabbing it, showing middle finger to others and selling bread pretending they've baked it. Somehow this approach does not works well for large projects where no single human or company could claim they've created the entire thing from scratch. From dev's perspective it is pretty much like ripoff and most devs are smart enough to get the idea.
          If he/she grabs the whole basket, so be it. Maybe there is a family a couple of streets away that needs it as much as the single passing individuals. But no, you automatically assume the grabber wants to sell the basket for profit.

          Why this "paying me" comes up over and over? I already stated I would not want it. All that great talk about openness and open source, benefits etc and ALL you can think in the end is coming down to simple plain greed.

          Originally posted by SystemCrasher View Post
          While you may or may not have something against this kind of attitude, that's how BSD ecosystem performs and it mostly explains underdeveloped state of BSD systems. They've started EARLIER than Linux, they had like 10 extra years to write the code, etc. But many companies never bothered to contribute back. And only few humans are ok with putting basket full of bread just to see some greedy spiggot grabbing it, showing middle finger to others and selling bread pretending they've baked it. Somehow this approach does not works well for large projects where no single human or company could claim they've created the entire thing from scratch. From dev's perspective it is pretty much like ripoff and most devs are smart enough to get the idea.
          Imagine that. Underdeveloped. And still, software from the "underdeveloped BSD systems" keeps trickling into the "developed" Linux systems. How come? You screw something up utterly, and then it's up to the "underdeveloped BSD devs" to fix your messes. Like OpenSSL -> LibreSSL. And I'm not even mentioning the plethora of other software that has migrated into Linux from those "underdeveloped BSD systems".

          Comparing the money behind development, Linux is under-developed and grossly inefficient, which is predictable given the logic of its development. Why? Half-baked solutions that are endlessly reworked, finished solutions that suddenly get deprecated because the devs wanted to redo the same thing from a new angle, kernel features that appear and then disappear a few iterations later... et cetera ad infinitum.

          Linux has seen literally billions of dollars' worth of active support from a bunch of companies that based their business on it. The BSDs have done nearly as much with a trickle of that, while contributing free code to everyone, including Linux, and without complaining about "who pays me".

          Originally posted by SystemCrasher View Post
          While you may or may not have something against this kind of attitude, that's how BSD ecosystem performs and it mostly explains underdeveloped state of BSD systems. They've started EARLIER than Linux, they had like 10 extra years to write the code, etc. But many companies never bothered to contribute back.
          Questions:
          - Are you simply unable to process the sentences I've written?
          - Or are you so stuck in Your Version Of Truth that small things like HISTORICAL FACTS are just there, planted by the BSD Devil to lead you astray on your quest to Greatness, and your DUTY to GNU is to ignore such things at all costs?

          I think this is the fourth time I've stated it. I'll do it as briefly as possible; maybe that will be easier for you to comprehend.

          - Linux started (initial release) on October 5, 1991
          - FreeBSD started on November 1, 1993
          - even if you count the long-defunct 386BSD, it was first released on March 12, 1992.

          Torvalds has himself said that if there had been something like BSD available to him, he would not have started Linux at all. Btw, that did not stop him from "stealing" (your term) from BSD code, quite a few times also without giving credit, contrary to what the license asked.

          Comment


          • #35
            Originally posted by aht0 View Post
            Atheros, Ralink, Marvell (some of them), Lucent Hermes/Intersil Prism WLAN cards should have capability to do AP. Intel cards cannot. For mesh networking Atheros, Marvell, Ralink (some USB 802.11n models?). Also Intel cannot.
            There are ways to create a mesh, e.g. by using ad-hoc mode, but whatever, Intel is crippled.

            consider laptop CPUs for example....
            A laptop CPU is orders of magnitude more powerful; a small IC without a heatsink can't be powerful for obvious reasons. And recent CPUs have hardware AES, etc. So that's unconvincing. IMHO the simpler explanation is that within the last ~10 years Intel has just been a bunch of treacherous DRM fucks.

            Nothing stops you from reading EPROM/EEPROM chips
            Sure. Just pointless for me.

            Do it once and you never have to worry about patching it. Otherwise you'd have to patch it over and over again as you reinstall OS or install new kernels..
            EEPROM patching is of little help for adventures in freqland, etc.

            I sort of expected "I am using Samsung S3 running Replicant 4.2"
            I dislike Android. It's an extremely troublesome system with useless apps. No Android app can beat the good old xchat or xterm I have on the N900. I can even use a real Midnight Commander, both locally and via SSH, etc. And I can do very advanced networking, equal to that of any Linux computer around. Android can somewhat do it, but it's awkward. Say, I've created an upstart config which sets a random wireless MAC for me on each boot. I doubt I could do that on Android, especially reusing knowledge from desktop *buntu/Debian. Since I do not like phone calls either, it's more like an ultra-light mobile computer & networking device to me. It still does a shitload of useful things for me, ranging from taking photos to advanced navigation using OpenStreetMap.
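            (For illustration only: the random-MAC trick boils down to something like the Python sketch below. It is just the idea, not my actual upstart job; the interface name "wlan0" and the iproute2 "ip" tool are assumptions.)

                # Sketch: generate a locally administered random MAC and apply it to the
                # wireless interface before it comes up. Needs root and the iproute2 "ip"
                # tool; "wlan0" is an assumed interface name.
                import random
                import subprocess

                def random_mac():
                    # First octet 0x02: locally-administered bit set, multicast bit clear.
                    octets = [0x02] + [random.randint(0x00, 0xFF) for _ in range(5)]
                    return ":".join("{:02x}".format(o) for o in octets)

                def set_random_mac(iface="wlan0"):
                    mac = random_mac()
                    subprocess.check_call(["ip", "link", "set", "dev", iface, "down"])
                    subprocess.check_call(["ip", "link", "set", "dev", iface, "address", mac])
                    subprocess.check_call(["ip", "link", "set", "dev", iface, "up"])

                if __name__ == "__main__":
                    set_random_mac()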

            what about blobs in N900? 3D accel, GPS, Camera, WLAN, BT, Radio - are usually ones lacking OSS drivers for phones. Mostly binaries
            The camera does not need blobs and is supported in mainline. The WLAN driver, "wl1251", is open source as well (it needs firmware, though), and the N900 is nearly the first mobile device with a real mac80211 driver, by Kalle Valo, one of the key people around Linux wireless. So it does the best one expects from a mac80211 Linux driver, from honoring CRDA (which has good and bad sides, some of them funny) to being able to do monitor mode, which also means it can create another interface on the same PHY, running a "normal" and a "monitor" interface at once, like any other decent Linux system. Actually most things are supported by open drivers. It is not perfect; the worst part is PowerVR, sure. But I haven't spotted anything better than that. At least I understand the architecture of this thing and can keep it under control. I have rather good control, being able to specify the battery charging current or take an emergency override of the low-voltage cutoff, so I can take over in some "unusual" cases where I may need, e.g., a few moments of run time no matter the cost, or want to use an unusual/troublesome charging source which can't deliver the full 1 A charging current (like 4 x alkaline batteries).
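            (Again just for illustration, the usual mac80211 way to add a second, monitor-mode interface on the same PHY, wrapped in a small Python sketch; "phy0" and "mon0" are assumed names, not taken from the N900 setup itself.)

                # Sketch: add a monitor-mode interface next to the normal one on the same
                # mac80211 PHY. Needs root plus the "iw" and "ip" tools; "phy0" and "mon0"
                # are assumed names.
                import subprocess

                def add_monitor_iface(phy="phy0", name="mon0"):
                    # mac80211 drivers allow several virtual interfaces on one radio.
                    subprocess.check_call(["iw", "phy", phy, "interface", "add",
                                           name, "type", "monitor"])
                    subprocess.check_call(["ip", "link", "set", name, "up"])

                if __name__ == "__main__":
                    add_monitor_iface()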

            Lol, Nvidia should drop supporting linux PC market for the next 5 years. Including locking users out of 3rd party drivers. Just to show what "uncooperative nature" looks like :P
            I do not care. The ones who would suffer are Nvidia's customers; I'm not one of them, and neither are most Linux devs. Furthermore, it would be a brilliant showcase of vendor lock-in and proprietary pests. So maybe it is a good idea XD.

            You'd see desktop-linux usage plummeting down and since majority of users have Nvidia in their machines. Lots and lots of Linux users have dual-boot systems and are using Windows as well. You, see it's reason Linux needed LiLo and GRUB. To boot into multiple OSes.
            That's what I call the wishful thinking of BSD users, lol. Linux on its own wouldn't lose anything. Nobody ever asked Nvidia or their stupid consumers to use Linux; those were their own decisions. It does not imply that Linux devs or anyone else would do something to please these nuts. They are on their own: Linux kernel devs will not help debug issues with tainted kernels, etc. They are not part of the process.

            BSD's have never had real need for such multiple-boot capability though you can chainload Windows trough FreeBSD's native boot loader if you wish to.
            Speaking for myself, GRUB is only useful to me because it can read the kernel from advanced filesystems like btrfs, with no special boot partition needed, plus the ability to boot various kernel versions and its integration with the OS. Once I install a kernel package I've built, it appears in GRUB and even boots by default if it is the most recent kernel around.

            Oh, Nvidia could. Lock out unsigned 3rd party drivers, no releasing of binary drivers.. Welcome back to 1999 when getting GPU to work on Linux was pain in the butt.
            IMHO that would be a rather stupid move by Nvidia, since the dGPU market is shrinking due to iGPUs, emerging markets like HPC & mobile devices are all about Linux, and they were utterly pwnzored on the x86 market. The only iGPU platform they have is ARM, which ironically can only run Linux; Windows does not support it. Furthermore, for anyone with a short memory: the first incarnation of the Tegra ICs relied on Windows support, they thought Zunes were the way to go. That was a laughable FAIL, of course, so Nvidia was forced to reconsider.

            Nvidia and it's customers have a LOT to lose from niche desktop-OS /sarcasm.
            Nvidia can't stop global processes like integration. It means that over time it will be more about SoCs with an iGPU on the same die or in a multi-chip assembly. Ironically, Nvidia can't build an x86 SoC and has been explicitly denied x86 tech by Intel and AMD. Meanwhile both Intel and AMD have APUs/iGPUs and are pushing hard to improve them. Get the idea.

            AMD could be bankrupt in next 2 years. It's already known that Polaris can't beat Pascal in performance. AMD is talking mostly about energy efficiency. If Zen should also fail, AMD is done. Preliminary articles promise performance akin to Intel's faster CPU's but not exceeding them.
            I think I've been hearing this for like 20 years or so.

            Supercomputers or super-clusters of disctinct "component" computers?
            Of course they are a bunch of distinct nodes; that's the only way to get THIS level of performance, as no single system could scale that wildly. It works, and that's the only thing that matters. HPC is a fairly large emerging market where high-end GPUs are in demand.

            Put Linux on 32-way hardware and see it fail. Hell, it's showing it's failures on PC right now, just watch the massive uproar about linux scheduler issues in net. "decade of wasted cores"
            Sure, it is easy to pick some synthetic test case to prove that something suxx. Ironically, the same could be done for the BSDs and everything else around. Btw, isn't it funny to blame Linux, which has taken over >95% of the TOP500 supercomputer list, while the BSDs are missing from that list? Have you ever heard of double standards? XD

            Linux desktop market share is under 2%. Insignificant. Majority of Nvidia's profits are coming from enthusiast GPU (gaming) market : from Windows users.
            Mobile and enterprise markets are much smaller, mobile in itself is more like an afterthought, not serious market for Nvidia.
            You can't prevent the future. Integration will go on; CPUs will meet GPUs on the same packages & dies. Intel and AMD are fine with that. Nvidia has only got Tegras, which are inherently doomed to run Linux, simply because MS won't do it.

            Build something on your own?
            I'll let the proprietary morons reinvent the wheel each and every time they build their cars. Hopefully that gives an idea of why the BSDs have almost vanished in embedded and are underdeveloped overall, especially when it comes to SoCs. I think there are better options, and they are working for me.

            For example. You work for the construction company. It provides you with the work place and assets. You get paid in return for your effort[....]
            Digital things are inherently "copy-paste", so it does not work like that. Design happens once; after that it is copy-paste kind of work. An approximate example: you've got a load of nanites. They can reshape nearby matter into whatever shape you want, free of charge. Say you want it to be a building. Everything revolves around the creation of the building's model, which is uploaded to the nanites that implement it. Similar buildings can be erected indefinitely using the same model. Should the one who created the model be paid for erecting each and every building? The job of creating the model was done once. Sounds strange? Lol, take a look at CNC, 3D printing, PCB manufacturing... get the idea.

            Draw parallel with software company. It offers you a job, means to do your job with and pays you for your effort. Do you think you still have rights for the code you produce if it was really your work in the first place to produce it and you already got paid for it? Same with the companies that provide hardware. Ok, it's usually settled straight in working contract. At the moment though, I am speaking about moral right.
            Been there, done that. It suxx for devs: unpleasant, outdated, and it makes things much more complicated than they need to be. After stumbling on some open-source teams and trying various approaches, I got the idea that there are options which work better for me, so I do not have to agree to crappy terms anymore. Let the greediest of corps rip off someone else :P.

            GPL is totally fair only from the end-user point of view, since he/she doesn't have to do jack sh1t anyway.
            The GPL is also good for devs: nobody can take over ownership of a dev's code. It is also fair if the project is larger than any single entity could create from scratch. Yelling about ownership when you haven't created the whole thing is a misnomer. Sorry, that kind of treachery only works on nuts.

            He/she is solely the consumer. Dev has to think on GPL imposed limitations of the license every single time he/she wants to distribute it.
            Sure, it brings limitations. But on the other hand, I gain a lot from the ability to base my work on large chunks of code created by thousands of devs. Sharing 10, 100 or even 1000 lines of code isn't a big deal compared to that. Not to mention that porting a BSD myself would be much harder, and I only have limited resources. Also, sharing brings the benefits of offloading work and reducing maintenance effort. That may or may not happen, but it's a good idea overall.

            How's that "fair" or "free"? One does not have to do anything than consume what you create, you have to jump trough loops to make him/her and license happy. No copyleft license is thus "fair" or really "free".
            Since nobody is the author of, e.g., Linux as a whole, it is only fair that everyone gets an equal set of rights and is prohibited from denying those rights to others. Why should anyone be a second-class citizen? In no way have these offenders created Linux alone; not even Torvalds could claim that.

            How can something that dictates on it's rules on you be free?
            The freedom of one being ends where the freedom of another begins. Something the proprietary DRM spigots fail to understand all the time. They are so eager to exercise THEIR stinky rights at the cost of everyone else's rights, and then they are surprised to learn that there is strong demand for better terms. That's why proprietary BSDs were displaced by open-source Linux systems in embedded. And honestly, it is a much healthier & more cooperative ecosystem. Good riddance.

            I'm sure the code that handles RHEL licensing and subscriptions is also all fully open and available for everyone?
            Yes, you can grab the RHEL source. But since Red Hat is a registered trademark, you may not use the name. That's how/why CentOS appeared: CentOS is RHEL with the RH trademarks removed. Not to mention that RH is a major contributor to the mainline kernel, glibc and many other things.

            When was last time proprietary software like Windows bricked hardware like Linux on such a massive scale..
            It hasn't bricked even a single device for me. Not to mention that Windows is unable to run on my ARM boards, MIPS routers and so on at all. It is just not going to take off and cope with my tasks, so I do not need or want that system at all, especially with its awful mix of price and EULA terms.

            and it wasn't even considered "serious issue" by authors of the code. What I should think of the systemd dev's opinion that it's just "one of the many ways one could destroy their system"?
            Let's remember, UEFI crap has died in numbers on reinstalls of ANY OS, Windows included. Firmware variables are tricky things, and some vendors just can't get them right. Technically it is a firmware defect, but it seems BSD zealots have got used to BIOS/UEFI shit. It is hard to praise BIOS or UEFI after trying, e.g., u-boot, where a bug-free, crap-free boot loader does only what I really need, not to mention that I can fix it since I have the source. If you think "opensource == shit", do not use it, enjoy your blobs. I can see that it is BIOS/UEFI that is bugged, not Linux itself. So from a purely technical standpoint, the systemd authors are correct.

            If it's not careless, ir-responsible, malicious and arrogant "we know it better" attitude, then what is? Yeah, we got your 200-300€ motherboard bricked, so what? You could have done it in many other ways.
            Sure, proprietary firmware vendors are careless, malicious and ignorant: they expose UEFI vars to the outer world but are unable to handle it when software actually fiddles with them. Never mind that this is allowed by the UEFI specs and so on. So if someone has released bugged firmware, they have to fix their shit. But proprietary SW vendors and their fans are all like this and would rather blame everyone around them instead of fixing their bugged shit. Then they are surprised to learn that some ppl are not exactly fond of this approach.

            OTOH it should give you a decent idea of why my devices use the open-source u-boot. It is free of this utter proprietary BS and does not brick the device, even if someone fiddles with its vars. I really think BIOS & UEFI are the worst parts of the PC altogether. Well, it's Wintel and proprietary vendors, so one should not expect much.

            Customers have numerous reasons to also like proprietary software. My country tried to use FOSS (estobuntu linux more precisely) for municipal and government structures in past. It was failure. I think, only thing that sort of stuck was OpenOffice and even that is slowly being replaced with MS Office again. In the end, FOSS was deemed more expensive to use than to continue using Windows in workstations.
            NP, let 'em enjoy their vendor lock-in, draconian EULAs and spyware, though I'm not sure how exactly that can be called "success". To me it seems your country has got incompetent or proprietary-minded staff. Looking at you, I'm pretty sure about it.

            What about the firmware in your PC's hard drives? :P
            It is bad, but it can be tamed; Turing is to blame. I.e., the firmware of a PC hard drive can't reliably analyze an arbitrary thing I'm going to launch. Since Linux changes quite often, reliably pwnzoring it could be an issue, especially without frequent firmware updates. Not to mention that the boot loader is fairly complicated, and, e.g., the filesystem could use checksums or even stronger crypto to make it tricky. It could still be hijacked, but that can also be countered so that the hijacking no longer works. It is a no-win situation for either side. So, generally, it is good to fix this problem, but it is not a topmost priority.

            Btw, ever heard of the digital warfare software created by the "Equation Group"? It actually patches HDD firmware, though it is only able to infect selected HDD models and Windows. Even this limited pwnage surely took a great deal of work; state of the art.

            You create something, you inherently own it. You decide what you want to do with it. Not the prospective user.
            You haven't created Linux (or BSD), so these mumblings are a misnomer to begin with. Sure, if you write your very own OS it is perfectly okay to close its source or whatever (and good luck with sales, lol).

            Or you would prefer the system where you build something (like house) and your neighbor claims it as his birthright? Or the producer of building materials comes around claiming that since you used his materials building your house, it also belongs to him/her/it.
            I prefer to achieve my goals, which have nothing to do with stupid greed or exercising ownership rights to a counterproductive and harmful extent. Let's be honest, I haven't created Linux and the userland around it. I've taken the great work of great humans and built on it, achieving my own goals, which were not foreseen by those humans. Surely I write some code in the process, but it is nowhere close to writing something like Linux.

            If you don't like it, don't use it. And, FOSS BSD's came around 1 year AFTER Linux. Gave you already one lesson of history. Read it again.
            That's why I do not use the BSDs, lol. As for the history lesson, the BSDs were here 10 years earlier. The fact that their project management sucked and that it took them something like 10 years and an AT&T lawsuit to get the idea is hardly an excuse. It's rather a showcase of crappy BSD project management. The BSDs have always sucked at it.

            Btw, Linus himself has said that if he had had something like BSD that worked on i386, he would never have started with Linux. He did it because he had no alternative and could not use Minix where he wanted it.
            Exactly, but since the BSDs sucked at project management, that never happened. The BSDs had each and every chance to be in Linux's place, given their 10-year head start.

            Planning to keep it to the autumn, will see how the Zen pans out. If it sucks I'm going for Intel Xeon workstation hw. More reliable than consumer hw.
            My AMD FX system uses ECC RAM, Linux recognizes it, etc. FX can do that. It is not widely documented, but unbuffered ECC usually works: at least on the FX series (and maybe other AM3 AMD CPUs) the DRAM controller is ECC-capable and the ECC lines are routed on most motherboards. Have fun mumbling about reliability while I've actually been enjoying it for years. AMD also made it fairly cheap, unlike Xeons. Too bad my Intel laptop is nothing like this and is apparently affected by "rowhammer".

            Count the dedicated processors and binary blobs running in it. Raid controller has 800MHz PowerPC running onboard it for example. Hope you won't get heart attack. It does not bother me in the least. Why should it?
            Still, speaking for myself, I would prefer to get rid of blobs. They are the most troublesome part of the system: nobody is going to fix their bugs, and I never know when this shit will backstab or just fail me, with no good options to do anything about it.

            Now show me blobless FOSS alternative with equal or better performance and capabilities.
            And why do I have to? Are you my customer? Have you paid me? Or does the stupid consumer think I'm going to compete on the amount of crap consumed? You've got it wrong, dude.

            You know, majority of people does not simply care[...]
            Some people are even dumbass enough to use Facebook, despite the very direct claim by Mark Zuckerberg himself that they are DUMB FUCKS. So yeah, dumb fucks are dumb. What do you expect from me? To be like them? Nope, not going to happen. I do not consider that something good either. Stupid ppl are a bad thing. I'm better off with those "lunatics", just because they do not look like a bunch of hopeless idiots.

            Comment


            • #36
              Originally posted by aht0 View Post
              If he/she grabs the whole basket, so be it. Maybe there is a family couple of streets ahead who needs it as much as single passing individuals.
              Sure, Sony & somesuch got really large family, blah-doh

              No, you automatically assume grabber wants to sell the basket for profit.
              If I see a bunch of these grabbers selling "free" bread, which isn't free anymore, what am I supposed to think? At the end of the day such activity helps to raise a "bread mafia" that struggles hard to grab all the baskets before others do, to ensure everyone else has to buy "their" bread on crappy terms. That's how the BSD ecosystem works.

              Why this "paying me" comes up over and over? I already stated I would not want it. All that great talk about openness and open source, benefits etc and ALL you can think in the end is coming down to simple plain greed.
              Plain greed, IMHO, is when someone takes an open-source thing, closes the source, adds DRM and pretends it is "their" product. To make it even more fun, they do not give a fuck what happens to the guy who brings all these baskets. So this guy eventually starves, gets ill or whatever, and nobody gives a fuck. Most companies do not bother themselves with what happens to upstream, just because the license allows it. That's what I call greed and ignorance, the hallmarks of proprietary devs & corps.

              Imagine that. Underdeveloped. And still software from "underdeveloped BSD systems" keeps trickling into "developed-Linux systems". How come? You screw something utterly up, then it's up to "underdeveloped BSD devs" to fix your messes. Like OpenSSL->LibreSSL.
              Yes, they are underdeveloped, since I can achieve my goals using Linux, but it is not going to work using the BSDs. They do not support the HW I'm using. And no, I'm not going to use trashbin-laptop "solutions".

              I don't mention plethora of other software that has migrated into Linux from those "underdeveloped BSD-systems".
              OpenSSL appeared on its own; it has nothing to do with the BSDs. LibreSSL is just some unpopular fork; I have yet to see its real-world use. OpenBSD isn't bad at security, but they suck at project management. I'm not a big fan of either SSL or SSH, for that matter. Both are bad at doing their primary jobs while bringing a ton of bells and whistles along with legacy cruft and plenty of long-standing technical issues.

              Comparing the money behind development, Linux is under-developed and grossly inefficient.[...]
              You see, when it comes to investment, few entities are in the mood to invest in poorly managed trashbins like the BSDs. They want some bang per buck, and the BSDs have proven to be bad at that multiple times.

              Linux has seen literally billions of dollars worth of active support from bunch of companies that based their business on it. BSD's have done nearly as much with trickle of it at the same time contributing free code to everyone - including Linux. Not complaining about "who pays me"..
              Isn't it funny that the BSDs had each and every chance to get those billions? Yet they were managed so badly that only a few entities were in the mood to give them some bucks; even the "permissive" license hasn't helped much.

              Questions:
              -Are you simply unable to process sentences I've wrote?
              -Or are you so stick in Your Version Of Truth that small things like HISTORICAL FACTS are just there, planted by BSD Devil to err you on your quest to Greatness - and your DUTY to GNU is to ignore such thing for all cost?
              It seems it is you who got stuck in your own version of the truth. Historical facts tell us the BSD systems were around something like 10 years before Linux appeared, even if they were not targeting the 386. Because BSD devs are morons when it comes to project management, getting the idea of what FOSS and copyright are took them a whole AT&T lawsuit, and then they also lagged behind when it came to targeting the i386 instead of "better" architectures, which were eventually smashed by x86. So much for project management excellence.

              - Linux started (initial release) on October 5th, 1991
              - FreeBSD started November 1, 1993
              - even if you count in long-defunct 386BSD, it was first released March 12 1992.
              Neither FreeBSD nor 386BSD started from scratch; they were reusing code from earlier BSDs, even if those hadn't targeted x86. OTOH the Linux kernel was written completely from scratch. Sure, it used GNU tools & toolchains. But the GNU ecosystem lacked a kernel; Hurd was just as bad at project management and decision making as the BSDs, or even worse.

              Torvalds has himself said that if it there had been something like BSD available for him he would not have started with Linux at all. Btw, it did not stop him from "stealing" (your term) from BSD code, quite a few times also without giving credit, opposite to what license asked.
              Yeah, but since the BSDs suck at project management (unlike Torvalds), they were too slow to target x86 and to get the idea of FOSS. So some random Finnish student showed the arrogant Berkeley nuts how to get it right. Funny.

              Comment


              • #37


                Originally posted by SystemCrasher View Post
                Laptop CPU is orders of magnitude more powerful. Small IC without heatsink can't be powerful for obvious reasons. Then recent CPUs got HW AES, etc. So it unconvicing. IMHO it is easier: within last ~10 years Intel are just bunch of treacherous DRM fucks.
                Funny that you claim that.
                AMD Bobcat (released 2011) and earlier lack AES instructions; AES appeared in AMD mobile CPUs with Jaguar. I can still find netbooks with Bobcat on board in retail stores. In fact, I happen to own one.

                From the Intel camp, look at, for example, the mobile Pentiums. Bay Trail-M, say. Even more recent than AMD's (ca. 2014 if my memory serves me right).

                It'd be interesting to see how such processors would handle wifi traffic riding all over the CPU. Suddenly, having a dedicated chip on the WLAN card doing most of the heavy lifting becomes logical.

                Originally posted by SystemCrasher View Post
                EEPROM patching is of little help for adventures in freqland, etc.
                Wrong.

                Originally posted by SystemCrasher View Post
                I dislike Android. Extremely troublesome system with useless apps. No android app could beat old good xchat or xterm I have on N900. I could even use real midnight commander. Both locally and via SSH, etc. Then I could do very adanced networking, equal to that of any Linux computer around. Android could somewhat do it, but its awkward. Say, I've created upstart config which sets me random wireless mac on each boot. I doubt I could do it on Android, especially reusing knowledge of desktop *buntu/debian. Since I do not like phone calls either, it more like ultra-light mobile computer &amp; networking device to me. Still doing shitload of useful things for me, ranging from taking photos to advanced navigation using OpenStreetMaps.
                "Normal" Android apps do not work on Replicant. FOSS libraries in it make it binary-incompatible with bionic-based "real" Android. There are custom FOSS repos for Replicant (F-Droid for example)

                Norton Commander, xchat, xterm? In one thread you scoff at ancient software, and now it has somehow become "superior". Make up your mind, or sort out your bias.

                A generic terminal, ssh and Total Commander are available to everyone on Replicant, and even more so on Android.

                Anyway. A phone is for calling. Period. It's a tool, not a toy. Not impressed, more like disgusted, like by some weird fetish. Do "your advanced networking" on a computer which is meant for it.

                Originally posted by SystemCrasher View Post
                That's what I call wishful thinking of BSD users, lol. Linux on its own wouldn't lose anything. Nobody ever asked nvidia or their stupid consumers to use Linux, these were their decisions. It does not implies Linux devs or somebody would do something to please these nuts. They are on their own, Linux kernel devs would not help to debug issues with tainted kernels, etc. They are not part of the process.
                Hell of an ego there. Calling 82% of enthusiast users "stupid" and nuts.

                Originally posted by SystemCrasher View Post
                Speaking for myself, grub is only useful for me since it could read me the kernel from advanced filesystems like btrfs, no special boot partitions needed. As well as ability to boot various kernel versions and integration with OS. Once I install kernel package I've build it appears in grub and even boots by default if being most recent kernel around.
                Speaking for myself, I don't need GRUB at all. It was just to make an argument: it was developed by the Linux crowd, so there had to be an obvious need for dual-boot.

                Btrfs will become advanced when the devs finally manage to finish it.

                Originally posted by SystemCrasher View Post
                IMHO it would be rather stupid action of nvidia, since dGPUs market is shrinking due to iGPUs, emerging markets like HPC &amp; mobile devices are all about Linux, and they were utterly pwnzored on x86 market. The only thing with iGPU they have is ARM, ironically it could only run Linux, windows does not supports it. Furthermore, if someone has got short memory, first incarnation of Tegra ICs relied on Windows support, they've thought Zunes are way to go. It has been laughable FAIL, of course. So nvidia has been forced to reconsider it.
                pwnzored - teenage l33t h4xx0r t4lk?

                Wishful thinking. Nvidia announced record revenue for Q3 FY2016. I bet it's making even more money now with Pascal.

                Originally posted by SystemCrasher View Post
                Nvidia couldn't stop global processes like integration. It means over time, it will be more like SoC with iGPU on same die or multi-chip assembly. Ironically nvidia can't implement x86 SoC and has been explicitly denied x86 techs by Intel and AMD. Then both Intel and AMD got APU/iGPUs and pushing hard to improve them. Get the idea.
                Intel's iGPU performance is abysmal. NOBODY buys Intel CPUs for their iGPUs - more often you hear grumbling that instead of this "wasted space" there should be more cache or a few more cores. Yeah, the iGPU exists, and it's convenient for a cheap "office" computer. Browse the net, use spreadsheets. That's the end of it.

                About the same for AMD's iGPUs. While they perform better, nobody who needs performance uses one; he gets a dGPU instead, with an 82% probability that it's from Nvidia.

                Originally posted by SystemCrasher View Post
                Of course they are bunch of distinct nodes. The only way to get THIS level of performance. No single system could scale so wildly. It works, its the only thing that matters. HPC is fairly large emerging market where high-end GPUs are in demand.
                Sure, it is easy to pick up some synthetic test case to prove something suxx. Ironically same could be done for BSDs and everything else around. Btw, isn't it funny to blame Linux who took over &gt;95% of supercomputers TOP500 list while BSDs are missing in this list? Have you ever heard about double standards? XD
                It doesn't even need synthetic tests. A Linux database server crashing on a regular basis on a big system, while the Solaris box next to it on identical hardware hums away, is a good enough indicator.

                Originally posted by SystemCrasher View Post
                You can't prevent future. Integration would go on, CPUs would meet GPUs on same packages &amp; dies. Intel and AMD are ok with it. Nvidia has only got Tegras. Inherently doomed to run Linux. Just because MS would not do.
                And Tegras, btw, tend to beat the hell out of competing chips. AFAIK Nvidia does not care about the mobile market beyond having a simple presence; its revenue sources are elsewhere.

                Originally posted by SystemCrasher View Post
                I would let proprietary morons to reinvent the wheel each and every time they're building their cars. Hopefully it gives idea why BSDs almost vanished in embedded and underdeveloped overall, especially when it comes to SoCs. I think there're better options, they are working for me.
                I again fail to follow your rambling logic, or how you manage to connect "proprietary morons" and BSD, which is as free as software gets. Certainly more so than Linux.

                Originally posted by SystemCrasher View Post
                Digital things which are inherently "copy-paste" so it does not works like this. Design happens once, then it is copy-paste kind of work. Approximated example would be: you've got load of nanites. They could reshape nearby matter into whatever shape you want, free of charge. Say, you want it to be building. Everything revolves around creation of building's model uploaded to nanites who would implement it. Similar buildings could be erected indefinitely, using same model. Should one who has created model be paid for erecting each and every building? Job of creation of model has been done once. Sounds strange? Lol, take a look on CNC, 3D printing, PCB manufacturing ... get the idea.
                You are free to create your own designs if you do not like the author(s)' conditions, or to use what's available for free. Getting mad at him/her/them and calling them "retarded", "proprietary morons", "idiots" etc. is just low.

                Originally posted by SystemCrasher View Post
                Freedom of one being ends where freedom of another being starts. Something proprietary DRM spiggots fail to understand all the time. They are so eager excersizing THEIR stinky rights at cost of everyone's else rights, then they are surprised to learn there is strong demand for better terms. That's why proprietary BSDs were offset by opensource Linux systems in embedded. And honestly, it is much more healthy &amp; cooperative ecosystem. Good riddance.
                Proprietary BSDs... LOL. It sounds like a pig with wings.

                And you are so eager to exercise your own rights, often rights that only you imagine exist - so what makes you any different from them? The kettle insulting the cooking pot.

                Out of curiosity, I want a real-life example of "They are so eager excersizing THEIR stinky rights at cost of everyone's else rights"...

                Originally posted by SystemCrasher View Post
                Haven't bricked even single device for me. Not to mention Windows is unable to run on my ARM boards, MIPS routers and so on at all. It just not going to take off and cope with my tasks. So I do not need or want this system at all. Especially at their awful mix of price and EULA terms.
                You are not the Universe. There were masses of people who killed their UEFI motherboards using Linux and the "rm" command, because the systemd devs thought that having certain variables mounted read/write was a bright idea.

                Originally posted by SystemCrasher View Post
                Sure, proprietary firmware vendors are careless, malicious and ignorant: they expose UEFI vars to outer world, but unable to handle it when software is actually fiddling with them. Don't you mind it is allowed by UEFI specs and so on? So if someone has released bugged firmware, they have to and fix their shit. But proprietary SW vendors and their fans are all like this and would rather blame everyone around instead of fixing their bugged shit. Then they are surprised to learn some ppl are not exactly fond of this approach.
                Oh sure, it's the manufacturer's fault now... classic. All other OSes could handle it without issues.

                Originally posted by SystemCrasher View Post
                NP, let 'em enjoy by vendor locks, draconian eulas and spyware, though I'm not sure how exactly it could be "success". For me it seems like your company has got incompetent or proprietary minded staff. Looking on you I'm pretty sure about it.
                Linux's "success" is becoming like Windows. What are you going to do then..? :P

                Originally posted by SystemCrasher View Post
                Btw, ever heard of digital warfare software created by "Equatoin Group"? It actually patches HDD firmware. Though it only able to infect selected HDD models and Windows. Even this limited pwnage surely took a great work, state of art.
                Gonna check it out. Thanks for the tip.

                Originally posted by SystemCrasher View Post
                That's why I do not use BSDs, lol. As for lesson of history, BSDs were here 10 years earlier. The fact their project management sucked and it took them like 10 years and AT&amp;T lawsuit to get the idea is hardly an excuse. Its rather showcase of crappy BSD project management. BSDs always sucked at it.
                Exactly, but since BSDs sucked at project management, it wasn't a case. BSDs had each and every chance to be in place of Linux, be it 10 years
                Wrong. Linux started a year earlier than 386BSD and 2 years earlier than FreeBSD. Their original AT&T Unix code had to be cleaned out, so it was effectively rewritten from scratch.

                Want to finally explain to me how you arrived at that "10 years earlier"?

                Originally posted by SystemCrasher View Post
                My AMD FX system uses ECC RAM, Linux recognizes it, etc. FX could do that. It is not widely documented, but unbuffered ECC usually works. At least on FX series (and maybe other AM3 AMD CPUs) DRAM controller is ECC-capable and ECC lines are usually routed on most MBs. Have fun mumbling about reliability while I'm actually enjoying it for years. AMD also allowed it to be fairy cheap, unlike xeons. Too bad my intel laptop is nowhere like this but apparently affected by "rowhammer".
                You think memory is the weak spot on an AM3+ system with an FX processor? Wrong. It's the voltage regulator modules. It took Gigabyte years (and, in some cases, half a dozen revisions of its motherboards) to get the engineering right so that a 125W TDP FX would not go into thermal throttling under heavy load or, worse, damage the motherboard. It was especially bad with the 125W FXes. Other manufacturers learned faster. Even now I sometimes see in a store some AM3+ motherboard with plain VRMs on it, without heatsinks, read that it "supports FX-83xx" and think "yeah, sure that would work well".

                Ever since Socket 939, all Athlon/Phenom/FX CPUs and the majority of APUs have had ECC support built into the memory controller. The issue is the motherboards: mostly only ASUS boards have ECC support added. And AFAIK all unix-like OSes, not to mention Windows, can make use of ECC, not only Linux.
                Last edited by aht0; 06 June 2016, 08:59 AM.

                Comment


                • #38
                  Originally posted by aht0 View Post
                  It'd be interesting to see how such processors handle wifi traffic riding all-over their CPU.. So, having dedicated chip on wlan card doing most of the heavy-lifting becomes suddenly logical.
                  The embedded AES coprocessor is there for disk data encryption or other local AES work (web content, probably, or whatever). For wifi, uhhh, there are like 2 orders of magnitude less raw bandwidth to process (disk/file encryption vs wifi packet encryption) under optimistic conditions (a few meters from the router, no other traffic at all), and the gap is even bigger under real conditions.

                  No, wireless load on PC processors is negligible.

                  I converted old crap single-core Atom boards (no AES nor anything remotely useful) into wifi-n routers (actually they were mostly firewalls ALSO doubling as wifi routers in their free time); the thing that chokes them isn't wifi.

                  The shit that kills router processors (and, to an extent, loads other processors too) is VPN, NAT, serious firewalling programs that inspect packets, and other packet trickery - none of which is handled by the wifi board firmware anyway. (Most router or firewall SoCs have dedicated hardware accelerators for that.)
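                  (A back-of-envelope sketch of that bandwidth gap; the throughput figures are rough assumptions for illustration, not measurements from any particular box.)

                      # Back-of-envelope AES bandwidth comparison: wifi packet encryption vs
                      # disk/file encryption. Figures are assumptions, not measurements.
                      import math

                      WIFI_REAL_WORLD_MBS = 5    # assumed ~5 MB/s for a loaded 802.11n link in practice
                      SATA_SSD_MBS = 500         # assumed ~500 MB/s sequential for a SATA SSD

                      ratio = SATA_SSD_MBS / WIFI_REAL_WORLD_MBS
                      print("Disk encryption needs ~{:.0f}x the AES throughput of wifi, "
                            "i.e. roughly {:.0f} orders of magnitude more."
                            .format(ratio, math.log10(ratio)))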

                  Intel's iGPU performance is abysmal. NOBODY buys Intel's CPU's for their iGPU's - more often you meet grumbling that instead this "wasted space" there should be more cache or few more cores. Yeah, iGPU exists, it's convenient for cheap "office" computer. Browse net, use spreadsheets. That's the end of it.

                  About the same for AMD's iGPU's. While they perform better, nobody who needs performance, uses one. He gets dGPU instead. With the 82% probability from nVidia.
                  Please look at the bigger picture.

                  Integrated GPUs have already destroyed the market for low-profile HTPC GPUs, and even for plain "multi-monitor 2D GPUs", which do have significant traction in the office and signage space (with a cheap office-chipset board and one of these processors http://www.intel.com/content/www/us/...000005556.html you can support 3 monitors without any additional card, for example). AMD APUs blow away anything that isn't a midrange gaming GPU.

                  He is saying that the current trend is to leverage the fact that the iGPU is, well, integrated, to push Nvidia (and AMD, in Intel's case) further and further out. Of course it will not kick them out of high-end GPUs, but it will probably nibble quite a bit more into the midrange sector in the coming years.


                  It doesn't even need synthetic tests. Linux database server crashing on regular basis on big system while Solaris next to it on identical hardware hums away is good indicator enough..
                  ... that there is something seriously seriously wrong with that setup.

                  What about making sure that the admin is not a moron or that the software running on it isn't crap? That's not normal; most systems I know of are solid.

                  Like, for example, the admin didn't disable memory overcommit or tune the OOM killer. That's a common noob-admin mistake.
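                  (For illustration, the kind of tuning I mean, as a small Python sketch; the values and the PID are assumptions, not a recommendation for any specific database setup.)

                      # Sketch of the overcommit/OOM tuning mentioned above. Needs root; the
                      # values and the database PID are illustrative assumptions only.
                      from pathlib import Path

                      def disable_overcommit():
                          # 2 = strict accounting: allocations fail up front instead of the
                          # OOM killer firing later under memory pressure.
                          Path("/proc/sys/vm/overcommit_memory").write_text("2\n")
                          # Fraction of RAM (plus swap) that may be committed in strict mode.
                          Path("/proc/sys/vm/overcommit_ratio").write_text("80\n")

                      def protect_from_oom(pid):
                          # Make the OOM killer strongly prefer other processes over this one.
                          Path("/proc/{}/oom_score_adj".format(pid)).write_text("-900\n")

                      if __name__ == "__main__":
                          disable_overcommit()
                          protect_from_oom(1234)  # hypothetical database server PID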

                  Out of curiosity. I want real life example for "They are so eager excersizing THEIR stinky rights at cost of everyone's else rights"...
                  He is talking about DRM. You know what DRM is? The stuff that, on average, fails to protect software from pirates while being a constant harassment for honest users?
                  Even if it worked correctly, it would still be a serious harassment of honest users.

                  You are not the Universe. There were mass of people who killed their UEFI motherboards using linux and "rm" command. Because systemd devs thought that having certain variables r/w is bright idea..
                  First things first: Oracle too cites this as an issue for some of their own Sun servers, without naming the specific OSes that cause it. So it's not just Linux that runs into the issue. https://docs.oracle.com/cd/E22368_01...356/gpyhs.html

                  Technically speaking, UEFI isn't supposed to be bricked by that; it should simply recover and chug along.
                  UEFI gets screwed by various kinds of things that shouldn't fucking touch it, mostly because most firmwares are low quality.
                  For example, running the dreaded commands on my workstation results in a system that cannot store boot entries anymore, but it can reflash itself with the integrated utility to fix the issue.
                  Running them on my laptop just wipes the boot entries (no duh), but I can set things up again afterwards, no problem.
                  (No, I'm not a fool: I have SPI flashing tools and I was curious.)
                  I've seen some BIOS updates for devices from 2014 onwards whose changelogs cite "fixing permanently deletable EFI vars" or something like that.

                  I've personally had plenty of fun with erased Secure Boot databases. I have no idea how, but on some craptops simply reinstalling Windows triggered a deletion of the Secure Boot key databases, so Secure Boot locked down the piece of shit until the manufacturer was gracious enough to release a firmware upgrade that let me turn that crap off altogether (back in the Win8 days). A year later.
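                  (For the curious, a small read-only Python sketch for poking at the efivarfs that this whole "rm" saga revolves around; the usual mount point and the availability of the "lsattr" tool are assumptions.)

                      # Read-only sketch: list UEFI variables exposed via efivarfs and show
                      # which ones the kernel marks immutable (newer kernels do this by
                      # default to stop the accidental-deletion bricking discussed above).
                      import subprocess
                      from pathlib import Path

                      EFIVARS = Path("/sys/firmware/efi/efivars")

                      def list_efivars():
                          if not EFIVARS.is_dir():
                              print("efivarfs not mounted (legacy BIOS boot?)")
                              return
                          for var in sorted(EFIVARS.iterdir()):
                              out = subprocess.run(["lsattr", str(var)],
                                                   capture_output=True, text=True).stdout
                              immutable = bool(out) and "i" in out.split()[0]
                              print("{} {}".format("[immutable]" if immutable else "[writable] ",
                                                   var.name))

                      if __name__ == "__main__":
                          list_efivars()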



                  Linux's "success" is becoming like Windows. What are you going to do then..? :P
                  No, that's Ubuntu's. Linux at large isn't "Ubuntu".

                  You think memory is weak spot on a AM3+ system with a FX processor? Wrong. It's voltage regulator modules. It took Gigabyte years (and half dozen revisions in some cases for it's motherboards) to get their engineering right so that 125W TDP FX would not go into thermal throttling under heavy load or worse, damage the motherboard. It was especially bad with 125W FX'es. Other manufacturers learned faster. Even now, I see sometimes in stores some AM3+ motherboard, plain VRM's on it without heatsinks, read that it's supports FX83xx and think "yeah, sure it would work well"..
                  1. Gigabyte is a bunch of asshats; their "revisions" where they remove features without changing the product code should be made criminal.

                  2. Pretty much all boards designed before the FX processors were unable to handle them properly, but the OEMs just added the microcode since the socket was the same - just like they did in the past for, say, AM3+ processors in AM3 boards: some boards could not handle them properly and some features (usually advanced power saving, or other things handled by pins that were unused on AM3) were disabled, but they were "supported".

                  Since after Socket 939, all Athlon/Phenom/FX CPU's and majority of APU's have ECC support built-into memory controller. Issue is in motherboards. Mostly only ASUS boards have ECC support added.
                  APUs in FM sockets (sadly) cannot support ECC due to the socket; afaik most mobile BGA packages can (but nobody gives a flying fuck, so no ECC there either).

                  While it's true that only ASUS has any semblance of ECC support in its mobos, on most DDR3 ASUS boards the ECC support sucks balls: after some community unrest they added a single toggle between "auto <-> disabled".
                  WHAT THE FUCK IS AUTO??? WHAT AM I PAYING YOU FOR, %%&£&/&?
                  This is now the standard even on "workstation" and ""server"" boards from the other consumer-crap vendors; only serious brands like Supermicro, or on-the-way-to-being-good brands like ASRock Rack, actually tell you at POST "hey, these ECC RAM banks aren't to my liking, so ECC IS NOT ENABLED".

                  At least on Intel there is a small C program floating around that can dump the appropriate registers and tell whether ECC is actually enabled. On AMD, afaik, there isn't enough public info to do the same.
                  Slight rant, I know.
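                  (A cruder alternative check, sketched in Python under the assumption that the standard Linux EDAC sysfs paths are present; it only tells you whether an EDAC driver has claimed the memory controller, not what the chipset registers actually say.)

                      # Cruder check than register dumping: if the kernel's EDAC layer has
                      # registered a memory controller under sysfs, ECC error reporting is
                      # active. An empty directory only means no EDAC driver is loaded,
                      # which is weaker evidence than reading the chipset registers.
                      from pathlib import Path

                      EDAC_MC = Path("/sys/devices/system/edac/mc")

                      def report_ecc_status():
                          controllers = sorted(EDAC_MC.glob("mc[0-9]*")) if EDAC_MC.is_dir() else []
                          if not controllers:
                              print("No EDAC memory controllers registered (ECC may still exist).")
                              return
                          for mc in controllers:
                              ce = (mc / "ce_count").read_text().strip()  # corrected errors
                              ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
                              print("{}: corrected={} uncorrected={}".format(mc.name, ce, ue))

                      if __name__ == "__main__":
                          report_ecc_status()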

                  And AFAIK all unixlike OSes, not to mention Windows can make use of ECC, not only Linux.
                  ECC is handled at the hardware level, so the OS on top can be whatever, or none at all.
                  So, for your information: he was making an AMD vs Intel comparison.
                  Last edited by starshipeleven; 06 June 2016, 11:58 AM.

                  Comment


                  • #39
                    Originally posted by SystemCrasher View Post
                    I'm fair: I dislike proprietary drivers EVERYWHERE, be it Linux or whatever. I'm not a big fan of proprietary drivers in open systems, because it kills the whole point. If there is significant chunk of important system-level code which is proprietary, it means system is no longer open. If someone is really ok with it, er, there is Windows already, no?
                    I, on the other hand, must be quite an odd person. I don't use Linuxes and BSDs alongside Windows because they are open source, but because they simply work better for me. Don't like your GUI environment? Just do some customising, or choose some alternative. Want to run system updates when it is convenient for you? Just apply the updates later. And so on - things you really cannot do with Windows. Change some advanced settings? Even with systemd scripts it is way easier than Windows registry fiddling.

                    I would use something like that even if it were closed source. So I don't have any big philosophical problem running the Nvidia blob. It works and gets the most out of my GPU, while the other alternative is simply unsatisfactory.

                    Comment


                    • #40
                      Originally posted by TiberiusDuval View Post
                      I on other hand must be quite odd person. I don't use Linux'es and BSD's alongside with Windows because they are open source, but because they simply work for me better. Don't like your GUI environment? Just do some customising, or choose some other alternative. Want to run system updates when it is convenient to you. Just apply updates later. And so on. Just things you really cannot do with Windows. Change some advanced settings? Even with SystemD scripts it is way easier than some Windows registry fiddling.

                      I would use something like that even if it was closed source. So I don't have any big philosophical problems running Nvidia blob. It works and gets most of my GPU, while other alternative is simply unsatisfactory.
                      It must be pointed out that it's the open-source-ness that allows those freedoms and customizations, so it's still connected.

                      Closed-source products tend to be more of a monoculture for market reasons. For example, users see only the GUI; if the vendor let everyone change the GUI dramatically, it would be hard to tell it is Windows/OSX/whatever.

                      Or, if they let anyone change and tweak their kernel dramatically, it becomes a PITA to explain what the product is. Like what happened when MS announced that the Raspberry Pi was supported by Windows 10 (the IoT version): people did not understand that it was an embedded-like OS able to run only custom-recompiled apps and the store's.

                      Or, again, they have a strong, compelling reason to support legacy programs, which hampers the development of smarter ways of doing things.

                      Comment
