freespirit
Yes, it must be the Kingston DataTraveler pendrive. It says USB 3.1 but doesn't claim any speed. In fact it disappeared and reappeared while I was partitioning it; maybe it's just loosely plugged in.
RAM sticks are Kingston KVR24R17D4K4/64. But I haven't tested them yet, just booted with them.
curaga
Thanks, I forgot about memtest=20 (or however many passes) as a kernel parameter. I guess that's better than memtester, since it'll test more of the RAM.
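For anyone following along, a minimal sketch of what that looks like on a Debian-style install; the pass count and paths are illustrative, it requires a kernel built with CONFIG_MEMTEST, and on a Talos II you can also just append the argument in petitboot's boot-entry editor:

```shell
# /etc/default/grub: ask the kernel's built-in tester for 20 passes on
# every boot (drop the option again once you trust the DIMMs).
GRUB_CMDLINE_LINUX="quiet memtest=20"
# Then: update-grub && reboot
# Any bad pages found are reserved by the kernel and never handed to the
# allocator; inspect the results with:
#   dmesg | grep -i memtest
```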
It Looks Like Raptor Is Gearing Up To Release A New Open-Source POWER System
-
Originally posted by madscientist159 View Post
Just wanted to mention that there is no microcode "blob" in the POWER9. POWER's microcode is far closer to ARM microcode than x86 microcode (see http://www.righto.com/2016/02/revers...rocessors.html for an insightful look into the old ARM1 "microcode"). If there was a blob, we'd be pushing to get source for it released, but as it stands any microcode in the POWER9 is baked into the HDL for the processor, and from there into the silicon itself alongside the other logic blocks. POWER microcode is not as simple as a blob stored in ROM; it's connected to the logic in the processor in more ways than that.
Originally posted by curaga View Post
I'm on madscientist's (and FSF's) side on this. [...]
Trying to make things clearer, I have some questions on POWER9:
a) Is the HDL the part of the processor where this microcode is installed, or is it the microcode itself?
b) Does the microcode run OS and program code, internal hardware instructions, or both?
c) As I understood it, x86 microcode contains hardware firmware while POWER9 microcode doesn't; is that right?
Now let's assume IBM, Intel, AMD and ARM are evil and put a backdoor inside their microcode. No one could inspect it, because the microcode is closed. What difference would it make from a user privacy and security perspective? Do the microcode differences still matter, or is it game over for everyone? If it's game over, in my opinion IBM should open this part too. I know it may sound unrealistic, but after the Snowden leaks nothing will surprise me anymore.
-
I'm on madscientist's (and FSF's) side on this. Loadable binary microcode is slightly less secure than baked-in microcode. I believe it is a lower bar for an insider to edit the microcode and the posted checksums than it is for an insider to edit the HDL before this specific CPU is manufactured. @phoron: There are several memory testers that run under Linux, even one in the kernel itself; they can test most of the RAM, except the parts held by the kernel or, in the userspace case, memory otherwise in use.
-
Originally posted by freespirit View Post
I'm not an expert so my idea is purely from a normal user perspective [...]
Last edited by madscientist159; 02 September 2018, 04:07 AM.
-
I'm not an expert, so my view is purely that of a normal user who cares about privacy and security; if I wrote something wrong, I apologise.
In my opinion, any kind of code that cannot be inspected is a potential backdoor/malware, simply because only whoever wrote it knows what it does. If my understanding is right, my view is closer to the AMD engineer's: I think IBM should be more open about this microcode even if it is not modifiable, because the good thing about open source code is that everyone can inspect it.
On POWER9 vs x86, the difference is that POWER9 has only this microcode as a black box while x86 has a lot of other stuff, so in my opinion it is better. But I'd like to read from Raptor CS a comment like "we are working with IBM to free or release this code too", because in the end it's a matter of trust.
I trust open source because a lot of people can review the code and find out if something malicious is inside. In a perfect world I wouldn't care about open or closed, because even a fully closed system that respects my privacy and security would be fine with me if I were 100% sure I could trust it. In the world we live in we can't even trust our brothers, so why should I trust someone else?
Of course there is a limit: when I buy the next Raptor CS board I have to trust them, because I'm surely not able to understand what's inside. I hope to be able to install everything without making a mess, but knowing that all the code my machine runs is open and can be inspected gives me peace of mind, because someone else will look at that code, even a competitor, just to say "haha, you are evil, here's the proof my product is better".
phoron
I think it depends on the pendrive; some pendrives are really fast, others are hell even for copying doc files.
Could you please post your RAM model?
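For what it's worth, a rough way to see which kind you have: time a sequential write to a file on the pendrive's mount point. The temp path below is only a placeholder so the snippet runs anywhere, and conv=fsync makes dd flush the data to the device before reporting a speed:

```shell
# Rough sequential write check; point OUT at a file on the pendrive.
OUT="${TMPDIR:-/tmp}/pendrive-speed.bin"
dd if=/dev/zero of="$OUT" bs=1M count=16 conv=fsync 2>&1 | tail -n 1
size=$(stat -c %s "$OUT")   # bytes actually written (16 MiB)
rm -f "$OUT"
echo "wrote $size bytes"
```

A slow drive will show up immediately in dd's reported MB/s figure; repeat with a larger count once you trust the number.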
-
Well said, madscientist159. And bridgman, I disagree, but I enjoy reading your content-rich, polite discussion.
Another angle on it is that besides problem scenarios, risk analysis and definitions (which are all sensible and necessary), you can simply look at requirements.
In my book, DRM is the antonym of owner control. If a system is designed with DRM support as a requirement, it just can't claim owner control; owner control would be a bug in the DRM.
But yes, I imagine that for systems without DRM as a requirement you can still discuss whether binary updatable firmware is tolerable. I think it is not, and when it is closed in order to support DRM, I see little sense even for the hardware vendor in not opening it and outsourcing maintenance to the community. But industries have inertia, of course...
Nice to hear of the new system. I'll be busy in October, but I hope someone spams me the details :P. I don't think I'll buy any for myself, but I might recommend it to someone.
@freespirit:
Right now I'm installing Debian with a buster alpha3 netinst on a Talos II in the cheapest configuration, except for the RAM. The RAM is not in the HCL (but close to some that are). I wonder what tests should be run to add the RAM to the HCL (or other hardware in general). Should I use memtester? Is there some option in u-boot, petitboot or elsewhere to properly test RDIMMs?
The install is slow, but then I don't have disks yet and I'm installing to an encrypted pendrive, so I guess that's why. I guess I could have used a live CD too if I had found one. I want to test the RAM before I decide on and get the disks, because I can still return it if I hurry.
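If it helps, memtester's invocation is just a size and a pass count; the numbers below are illustrative, and it should be run as root so it can lock the pages it tests. Note it can only reach currently-free memory, which is why the in-kernel memtest= boot parameter covers more of the DIMMs:

```shell
# Test 4 GiB of currently-free RAM for 2 passes; memtester mlock()s the
# region and walks it with a series of data-pattern tests, printing
# "ok" or a failure report per test.
sudo memtester 4G 2
```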
-
Most of the security community we work with would disagree (and have disagreed); they would say mutable black box software makes the system unverifiable and therefore untrustworthy. This may indicate a wider split within the security community, though I'll just say that some fairly high profile entities are coming around to our side of the debate. By removing the right to modify, owner control aside, you make the owner wholly dependent on the vendor for any potential updates, and make it impossible to fix issues locally that the vendor does not consider to be a bug.
There are multiple other real world problems with mutable black box firmware. Legally, it removes consumer rights and remedies when bugs are found, since software is treated differently than hardware, and any damages are restricted in most cases. In the same vein, that firmware is protected against user modification by copyright in a way that a physical item is never subject to -- this had to be handled with a specific jailbreaking exemption by the copyright office after widespread disregard of the law, but that exemption only applies to phones and remains an uphill battle to use at all.
Putting the microcode into a non-updateable ROM forces the vendor to provide at least some basic assurance that the product will work as intended (specifically, as communicated to the buyer at the time of sale). Or is AMD willing to refund and pay damages in the case that the machine owner doesn't agree to some of the modifications included in an ostensibly bugfix microcode update? If so, that would probably be an industry first.
No matter how you look at it, this model seems to be the vendor wanting to have its cake and eat it too: they want to avoid expensive CPU recalls due to bugs, but they also don't want the owner taking advantage of the update facility for their own purposes. I have no idea how that can be called owner controlled when embedded right in that statement is restricted, vendor-only access to part of the CPU. I'd rather have the vendor take the time to make sure the baked-in microcode was bug-free than have the potential of a future bugfix update also removing functionality with no legal recourse.
Finally, open source loading code is irrelevant. I can use open source code right now to install malware if I want; it's the exact nature of the uploaded binary that matters. I note with dismay that full changelogs for microcode are still not available; this does not inspire confidence, and it reinforces the security-by-obscurity allegations.
EDIT: As a closing thought, the interesting discussion here doesn't have much bearing on POWER CPUs since their microcode isn't ROM based, it's more like ARM where it's hard wired as silicon. Had some interesting things come up during development because of this 😉.
What do you all think out there? 😁 Which level of access do you want to see in modern hardware?
Last edited by madscientist159; 02 September 2018, 01:52 AM.
-
Originally posted by madscientist159 View Post
Not sure that "looser" is the right word, but yes, it does seem we have a slightly different interpretation of "owner controlled". Our definition is very simple: if it's mutable in any way after manufacture (therefore invoking copyright law and EULAs instead of implied warranties of merchantability, etc.), then the owner must be able to fully modify that component as desired.
Originally posted by madscientist159 View Post
Maybe an analogy would help: if I have a lock, let's call it Lock A, on a safe that is hard-manufactured to allow a specific, unique key and no others [...]
If:
- the locking mechanism on lock B is controlled by a binary file loaded into the lock
- the binary file is loaded by open source code that you can inspect and modify
- the contents of the binary file can be verified (via CRC or direct binary comparison) by open source code that you can inspect and modify
... then you have confidence that the lock cannot be changed to allow additional access without your permission. Unless you can inspect the internals of lock A, you have no reason to believe that lock B is any less safe than lock A, since both could be hiding surprises.
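That verification step is trivial to do outside the driver as well. A minimal sketch with a stand-in blob: the file name is hypothetical, and the "published" value here is just the SHA-256 of the stand-in bytes, standing in for whatever digest the vendor publishes in its release notes:

```shell
# Stand-in firmware blob; in reality this is the file the driver uploads.
printf 'hi' > firmware.bin
# The hash the vendor would publish alongside the release
# (this is simply sha256("hi"), matching our stand-in blob):
published=8f434346648f6b96df89dda901c5176b10a6d83961dd3c1ac88b59b2dc327aa4
actual=$(sha256sum firmware.bin | cut -d' ' -f1)
if [ "$actual" = "$published" ]; then
    result="checksum OK"
else
    result="checksum MISMATCH"
fi
echo "$result"
rm -f firmware.bin
```

Of course this only proves the blob matches what the vendor published, not that the published blob is benign, which is exactly where the two sides of this thread disagree.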
That said, the lock on my gun room is an S&G mechanical (lock A) and yes I did inspect it
Where our views differ, I think, is your implication that if a feature exists (potential for loading different binary data into the locking mechanism) then you must be given a way to use that feature to your advantage and have full visibility into its workings. I regard that as independent from and orthogonal to being able to confirm that the lock has not been changed; while both are desirable the absence of one does not diminish the value of having the other... in my opinion of course.
If the lock vendor supplies you with a new binary file to change the lock's behaviour as you requested you can reasonably worry about whether additional access was also slipped in... but you have equal reason to worry about whether additional access was slipped into the original file supplied by the vendor.
Now let's have the vendor store that original binary data inside the lock's micro-controller via ROM data included in the fabrication mask: nothing changes, and you have the same reason to worry about whether additional access was slipped in by the vendor as you do if the file was verified and loaded by open source driver code.
Last edited by bridgman; 02 September 2018, 12:41 AM.
-
Originally posted by bridgman View Post
Ahh, OK... looks like your definition of "owner controlled" is looser than mine.
We might have to replace "owner controlled" with something that uses more words
As a hardware designer I view ROM'ed microcode as being no different from microcode loaded into RAM by an open source driver, since the owner has equal visibility (zero) and ability to prevent changes (absolute) in the two cases.
Maybe an analogy would help: if I have a lock, let's call it Lock A, on a safe that is hard-manufactured to allow a specific, unique key and no others (also presuming copies of said keys are not taken), I can be reasonably assured that a.) the lock cannot be changed to allow additional access without substantially replacing it and b.) that any defect allowing others access would be something the vendor would be held responsible for. I have another lock, Lock B, that has a special port by which a locksmith with a special key can rekey my lock (ostensibly with my permission), but I, as the lock owner, cannot gain access to this rekeying mechanism. The vendor can now disclaim liability for third party access to "my" lock, and I have no way to use this feature to my advantage at all because it only recognizes the special locksmith key to access the rekeying mechanism. Did I mention that the locksmith is always too busy to rekey my lock the way I want it, and/or wants to charge a fee that is several times the value of the contents of the safe?
One can easily see that Lock B is only superior if the key to the special port is handed to the lock owner at purchase, ideally with instructions on how to use the rekeying system as well.
EDIT: Fundamentally, from a legal perspective, a designed, baked-in component is treated differently than the same component distributed as software. The recent Intel microcode fiasco nicely highlighted just one of the possible abuses of this distinction.
Last edited by madscientist159; 01 September 2018, 10:23 PM.
-
Originally posted by madscientist159 View Post
That's a very unusual stance to take. Whatever microcode (to be clear, horizontal microcode and similar items, not horizontal microcode + executable code like the new normal in the x86 world) is inside an IBM CPU is baked into the silicon and, crucially, cannot be changed by anyone post-manufacture. Every mutable component on a POWER CPU is owner controlled, whereas most mutable components on an AMD CPU are AMD / vendor controlled.
We might have to replace "owner controlled" with something that uses more words
As a hardware designer I view ROM'ed microcode as being no different from microcode loaded into RAM by an open source driver, since the owner has equal visibility (zero) and ability to prevent changes (absolute) in the two cases.