That Open, Upgradeable ARM Dev Board Is Trying To Make A Comeback

  • Originally posted by robclark View Post

    a couple things to point out: 1) snapdragon devices without modem (generally apq80xy... msm89xy are the ones with built-in modem) talk to external modem over some sort of serial link, which the replicant folks (and myself) would describe as "good modem isolation".. and 2) msm7xxx is *ancient*, from an era where the application processor (ie. what runs linux) was bolted on the side, if present at all.
    robclark, you might be interested to know that i was part of the team (including cr2 and pH5) back around 2003-4 on xda-developers who did the original reverse-engineering on the first HTC smartphones, including the blueangel and the universal (the clamshell 3g laptop). as a group we were sufficiently prolific that our work could not be added upstream, because the rule was that all device driver files had to be added to the top-level arm subdirectory: we would have tripled the number of files (!) so we kept it separate (xanadux on sf.net).

    by the time i owned 9 (!) smartphones i decided i'd had enough. android soon came out and i trusted google to do a decent job. turns out that they screwed everybody over (including one of the key lead developers of android) by using BSD-style licenses... so i just wanted to say *THANK YOU* for having the good sense to implement replicant under the GPL. google has caused enormous damage (thanks to vendor confusion over what licenses are involved in android) through its hypocrisy of implementing android (the userspace OS) under a BSD-style license... but not reimplementing the linux *kernel* under a BSD license.



    • Originally posted by lkcl View Post
      i dealt with that in the laptop housing (aka "dock") by adding in an STM32F072 (which cannot be DRM-locked),
      I know what STM32 is, they're too nice to ignore :P.

      it's powered continuously from the battery, it has RTC functionality and wakeup.... but critically the firmware is entirely LGPLv3+ (libopencm3) and GPLv3+. NO spyware. http://git.rhombus-tech.net/?p=eoma-...s/heads/master
      RTCs in larger systems usually behave similarly to the STM32's. In STM32 terms it is like entering Standby mode, where the built-in LDO (the simplest integrated "power manager") turns most things off and then the RTC (and a few other wakeup sources) can start the system again. The STM32 does a downscaled/simple version of the same thing. So realistically I wouldn't be scared of doing wakeup via the AW+AXP20x. Still, the STM32 is more predictable and far more flexible, and I could imagine it being useful in many other ways, e.g. for analog inputs, or as a smart watchdog capable of very radical deglitching by e.g. toggling the main system power.

      jaezuss. *sigh*. well it looks like, from robclark's investigations, that qualcomm are actually making an effort, and he says (above) their catalog parts don't even have modems at all.
      Do they come with non-locked boot? Do they allow mortals into TrustZone? I wonder if they've documented their SecureBoot, etc. After all, maybe their parts without cell modems aren't bad. And of course an open-source GPU driver is an advantage, any day.

      *sigh* yeah. i did work on a standard called EOMA200 (still in development) - http://elinux.org/Embedded_Open_Modu...cture/EOMA-200 - but it's a desktop standard (not a portable one), similar to COMExpress. COMExpress is *nice*: none of its interfaces are "optional". optional interfaces instantly kill a standard stone-dead.
      Strangely enough, EOMA-200 suits NAS boxes/microservers. Realistically, these two only need Ethernet, storage, some RAM/CPU and a power input. Everything else is really optional for their tasks. Maybe these need their own standard?

      As for desktops, I guess the largest issue is that most ARM SoCs just do not target desktops. So I wonder which SoCs could actually match this standard easily while coming in at a reasonable price, etc.? Also, I'm not really getting the point of doing e.g. PCIe via USB. I guess it performs really badly. Btw, does it require some special support on the software side? I wonder how a PCIe device appears via USB.

      the thing is, once you wander into SATA and GbE territory, pricing goes up massively, power goes up, complexity goes up.
      That's what makes the A20 somewhat unique: a cheap source of "storage node" & "microserver" builds. Realistically these are the A20's strongest points, and probably the reason it is still widely used and manufactured, unusually long for an Allwinner SoC. It's still okay for read-mostly network stuff, and the ability to use IIO/GPIO makes it somewhat unique: it bridges several worlds.

      i had to make that call to stick within what i could reasonably achieve on a reasonable budget without *any* VC funding.
      Thanks for that. After all, trying to bring some order into the ARM chaos isn't a bad idea, and IMHO EOMA-68 makes some sense in "industrial" applications. Though I'm still not sure it is a good idea for a laptop, especially if it can't easily use the "desktop" parts of a SoC, and there are no cheap USB 3.x-enabled SoCs either.

      we'll bootstrap up to more powerful standards and modules... but i gotta start somewhere, y'know?
      Yeah, I can understand it could be hard. Actually, the whole EOMA68 story is amazing.



      • Originally posted by SystemCrasher View Post
        I know what STM32 is, they're too nice to ignore :P.
        the 64-pin STM32F072 is like $1.50! and more powerful than an entire computer from the 1980s, mwaaaa

        Strangely enough, EOMA-200 suits NAS boxes/microservers. Realistically, these two only need Ethernet, storage, some RAM/CPU and a power input. Everything else is really optional for their tasks. Maybe these need their own standard?
        mmm.... maaayybeee... yeah, i see where you're coming from.

        As for desktops, I guess the largest issue is that most ARM SoCs just do not target desktops.
        pretty much, yeah. it's a market opportunity that's been missed, but which is kiinda taken up by the embedded / engineering boards.

        So I wonder which SoCs could actually match this standard easily while coming in at a reasonable price, etc.?
        the iMX6, which has one single-lane PCIe, and gigabit ethernet. the quad core version is $36 in the west (and a lot less in China)

        the marvell kirkwoods (except they won't damn well give people any - marvell is insane - i mean that literally)

        there _was_ something that ended up in the vortex86 - it was a 486 at 1ghz... IAD100 or something... they gave up because nobody wanted to run windows xp...

        the loongson series of processors are FRICKIN AWESOME and are MIPS64... they're power-hungry monsters even in 28nm but that's ok...

        there's some russian company, baikal, doing something that has PCIe....

        there's a whole obscure selection basically

        Also, I'm not really getting the point of doing e.g. PCIe via USB. I guess it performs really badly. Btw, does it require some special support on the software side? I wonder how a PCIe device appears via USB.
        yyeah USB-to-PCIe has only just come out now with the introduction of USB3, it does actually work, they're quite expensive relatively speaking, they do have linux kernel driver support, but... yyyeah, put them onto a USB2 bus and you're down to 480mbit/sec.... mmmm
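        the gap is easy to put in numbers - a back-of-envelope sketch (the ~2/3 USB protocol-overhead factor here is a rough rule-of-thumb assumption, not a measured figure):

```python
# USB 2.0 high-speed: 480 Mbit/s on the wire, minus protocol overhead
usb2_raw_mbit = 480
usb2_mbyte_s = usb2_raw_mbit / 8 * (2 / 3)   # ~40 MB/s usable, ballpark

# PCIe 2.0 x1: 5 GT/s with 8b/10b line coding -> 500 MB/s raw
pcie2_x1_mbyte_s = 5000 * 8 / 10 / 8

print(f"USB 2.0:     ~{usb2_mbyte_s:.0f} MB/s")
print(f"PCIe 2.0 x1: ~{pcie2_x1_mbyte_s:.0f} MB/s")
```

        so a PCIe peripheral behind a USB2 link is throttled more than tenfold before the tunnelling overhead is even counted.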

        That's what makes the A20 somewhat unique: a cheap source of "storage node" & "microserver" builds. Realistically these are the A20's strongest points, and probably the reason it is still widely used and manufactured, unusually long for an Allwinner SoC. It's still okay for read-mostly network stuff, and the ability to use IIO/GPIO makes it somewhat unique: it bridges several worlds.
        why do you think i picked it and pushed allwinner so hard for GPL compliance?? irony is that i took SATA and GbE off the standard... o well..

        Thanks for that. After all, trying to bring some order into the ARM chaos isn't a bad idea, and IMHO EOMA-68 makes some sense in "industrial" applications. Though I'm still not sure it is a good idea for a laptop, especially if it can't easily use the "desktop" parts of a SoC, and there are no cheap USB 3.x-enabled SoCs either.
        ... it's the power budget that's the main issue (for all the USB3 peripherals) - i mean we'll _do_ it, but it'll be a bigger project, more costly, and a complete redesign of the casework to fit multi-cell batteries and i won't be able to use the low-cost single-cell battery charger IC (bq24193), i'll likely have to go with something like the novena's power board (which includes an STM32F103, yay!). right now i can get away with a bq24193 because the laptop's running on a 15W power budget.
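        the arithmetic behind the single-cell constraint works out roughly like this (nominal 3.7V per li-ion cell is an assumption; none of these figures come from a charger-IC datasheet):

```python
budget_w = 15.0        # laptop power budget quoted above
cell_v = 3.7           # nominal voltage of one li-ion cell (assumption)

amps_1s = budget_w / cell_v        # ~4 A through a single-cell charger
amps_2s = budget_w / (2 * cell_v)  # a 2S pack halves the current

print(f"1S: {amps_1s:.1f} A, 2S: {amps_2s:.1f} A")
```

        push the budget up for USB3 peripherals and the single-cell current climbs past what a small charger IC comfortably handles - hence the multi-cell pack and the power-board redesign.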

        but we have to get through this phase first - walk before run, so - anyone reading this, help fund this campaign so we can _get_ to that next phase, ok?

        Yeah, I can understand it could be hard. Actually, the whole EOMA68 story is amazing.
        thanks
        Last edited by lkcl; 09 July 2016, 12:21 PM.



        • Originally posted by SystemCrasher View Post
          Do they come with non-locked boot? Do they allow mortals into TrustZone? I wonder if they've documented their SecureBoot, etc. After all, maybe their parts without cell modems aren't bad. And of course an open-source GPU driver is an advantage, any day.
          iirc, it was a combo of boot-mode pins (which on some dev boards I've seen are hooked up to dip switches, or for something more mass produced could be tied high or low) plus e-fuse which controls whether a signed bootloader is required. Honestly I'm not an expert at that stuff, but I think it means that it is up to device manufacturer whether to allow unsigned bootloader/tz/etc..

          on shipping phones/tablets, it is a problem (but just as much a problem as any other SoC).. but if you are designing your own board and want it to be unlocked, then afaiu it should be no problem.




          • Originally posted by robclark View Post

            iirc, it was a combo of boot-mode pins (which on some dev boards I've seen are hooked up to dip switches, or for something more mass produced could be tied high or low) plus e-fuse which controls whether a signed bootloader is required. Honestly I'm not an expert at that stuff, but I think it means that it is up to device manufacturer whether to allow unsigned bootloader/tz/etc..

            on shipping phones/tablets, it is a problem (but just as much a problem as any other SoC).. but if you are designing your own board and want it to be unlocked, then afaiu it should be no problem.

            Just in case people didn't know, HTC phones can have the bootloader unlocked officially.


            It's the reason I like HTC so much.



            • Originally posted by anda_skoa View Post
              As I've explained in a reply to another comment, it is a common trap to equate hardware accelerated rendering with 3D.
              Realistically, most apps and UI toolkits these days are still just plain 2D. Only a few kinds of apps need the sort of "computed graphics" that would really benefit from GPU-side acceleration. Of all the things I use, I can only think of KiCad and the web browser. Both have reasonable 2D fallbacks, btw.

              Furthermore, mobile SoC vendors aren't at all eager to implement "desktop" GL. After all, it is slow and difficult to implement, and there is nothing to justify that waste of resources. So realistically, those who want full-fledged desktop OpenGL have three vendors to choose from: AMD, Nvidia, and Intel. Even then, it is not as simple as that. Clueless use of 3D can be extremely slow on some HW and hardly counts as "acceleration". Say, Google Maps barely manages a few FPS on my laptop, etc. It works fine on a desktop, but I'm not going to carry my desktop when I go hiking :P. So Google Maps is not very usable for my real-world tasks. Not to mention it is proprietary, does not supply machine-readable MAP DATA, is utterly vendor-locked, really useless in offline modes, and of no use for anything but Google's hardcoded scenarios. I'm sorry to inform Google, but I'm really not going to buy more powerful (and totally backdoored) Intel laptops, etc. I'd rather grab OSM map data and render/convert it the way that works for me.

              Why do I equate acceleration with 3D? Because at the HW level there is no dedicated HW to accelerate 2D operations, etc. So it inevitably ends up calling the 3D hardware, and on Linux/Android/... systems that is most likely to be GL or GLES. Therefore it is very likely to be 3D/OpenGL; hiding it under some abstraction does not change that. Those using it have to understand they're going to face all kinds of HW-compatibility & driver woes, like gamedevs do. So it should only be used when really needed, and there are quite few cases where a usual app really needs something like this. Btw, GL game development looks like this: 20% of the time you write the code, 80% of the time you handle all the woes, quirks, incomplete features and driver bugs, trying to handle the sh*tstorm coming from unhappy users who manage to get all sorts of crap, be it wrong rendering, awful performance, or maybe even driver/GPU crashes. Not to mention users blame devs for absolutely anything. In browsers it gets so bad they even have to maintain fairly large GPU/driver-combo blacklists.



              • Originally posted by SystemCrasher View Post
                Why do I equate acceleration with 3D? Because at the HW level there is no dedicated HW to accelerate 2D operations, etc.
                that's not quite true - the GC320 is a 2D-acceleration hard macro available on many SoCs (it's from vivante, and it has been reverse-engineered), and the A20 and many other allwinner processors have G2D, which is supported by xf86-video-fbturbo. parabola just added it to the repository after i tested it out from source code, so the Libre Tea Computer Card will go out with 2D-accelerated X11 by default.

                it can be done...
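                for reference, an fbturbo device section typically looks something like the sketch below. the exact Option names (especially anything G2D-related) are placeholders to verify against the driver's README, not a definitive configuration:

```
Section "Device"
    Identifier "Allwinner FBDEV"
    Driver     "fbturbo"
    Option     "fbdev" "/dev/fb0"
    Option     "SwapbuffersWait" "true"
EndSection
```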



                • Originally posted by SystemCrasher View Post
                  Realistically, most apps and UI toolkits these days are still just plain 2D. Only a few kinds of apps need the sort of "computed graphics" that would really benefit from GPU-side acceleration. Of all the things I use, I can only think of KiCad and the web browser. Both have reasonable 2D fallbacks, btw.

                  Furthermore, mobile SoC vendors aren't at all eager to implement "desktop" GL. After all, it is slow and difficult to implement, and there is nothing to justify that waste of resources. So realistically, those who want full-fledged desktop OpenGL have three vendors to choose from: AMD, Nvidia, and Intel. Even then, it is not as simple as that. Clueless use of 3D can be extremely slow on some HW and hardly counts as "acceleration". Say, Google Maps barely manages a few FPS on my laptop, etc. It works fine on a desktop, but I'm not going to carry my desktop when I go hiking :P. So Google Maps is not very usable for my real-world tasks. Not to mention it is proprietary, does not supply machine-readable MAP DATA, is utterly vendor-locked, really useless in offline modes, and of no use for anything but Google's hardcoded scenarios. I'm sorry to inform Google, but I'm really not going to buy more powerful (and totally backdoored) Intel laptops, etc. I'd rather grab OSM map data and render/convert it the way that works for me.

                  Why do I equate acceleration with 3D? Because at the HW level there is no dedicated HW to accelerate 2D operations, etc. So it inevitably ends up calling the 3D hardware, and on Linux/Android/... systems that is most likely to be GL or GLES. Therefore it is very likely to be 3D/OpenGL; hiding it under some abstraction does not change that. Those using it have to understand they're going to face all kinds of HW-compatibility & driver woes, like gamedevs do. So it should only be used when really needed, and there are quite few cases where a usual app really needs something like this. Btw, GL game development looks like this: 20% of the time you write the code, 80% of the time you handle all the woes, quirks, incomplete features and driver bugs, trying to handle the sh*tstorm coming from unhappy users who manage to get all sorts of crap, be it wrong rendering, awful performance, or maybe even driver/GPU crashes. Not to mention users blame devs for absolutely anything. In browsers it gets so bad they even have to maintain fairly large GPU/driver-combo blacklists.
                  After reading your post several times, I'm now convinced that your issue is not with 3D graphics; it is with Intel's horrible performance. And you are completely oblivious to it and blame it on anything else. That's why Intel is largely responsible for killing PC gaming, and most people are clueless about it.



                  • Originally posted by duby229 View Post
                    After reading your post several times, I'm now convinced that your issue is not with 3D graphics; it is with Intel's horrible performance. And you are completely oblivious to it and blame it on anything else. That's why Intel is largely responsible for killing PC gaming, and most people are clueless about it.
                    Come on, quit this shit already.



                    • Originally posted by pal666 View Post
                      your brain is just too weak to understand such simple things.
                      My brain isn't the strongest on the planet, or even in this thread. But you're in no position to yell about it, I think.

                      arbitrary certification is meaningless,
                      RYF certification is IMHO useful, since it acts as a badge declaring that the device is not going to exhibit unwanted behavior and that there is no "hard" vendor lock attached. I think it is good when a device comes with no strings attached and a 3rd party has verified it. A third-party arbiter is needed, because MFRs are inherently prone to marketing-BS kinds of things. Just look at our disagreement about the A20 vs RYF. I think the FSF isn't the worst arbiter when it comes to liberty. OTOH I have no reason to trust the weird judgement of some random consumer.

                      while open drivers allow device to be used with software of my choice and in general are technically better.
                      non-updateable driver is not opensource, you seem to be confused.
                      Availability of source is one thing. The ability to change the code on a particular system is another.

                      If you've failed to learn the story behind GPLv3: under GPLv2 it is perfectly legal to TiVoize a device. So you can get the source and change it, but who said the device is obliged to run it? GPLv2 failed to foresee this kind of treachery. So if the device rejects your code... it's OK under GPLv2. BSD/Apache are "permissive", so they do not care at all. Good luck updating drivers, then. Btw, most Qualcomm devices around are good at this kind of crap. Major privacy invasion is a bonus (crap comes in large packages).

                      if system is owned via huge bugged hardware, who is to blame? if system is owned via huge bugged non-replaceable firmware in rom, who is to blame? from software pov there is no difference.
                      So it is about getting crap-free hardware without fatal bugs. Especially in large, complicated firmware. Especially if it is replaceable, but artificial digital locks are in place to ensure that only the vendor can truly control the device, and the user is merely a guest who has to obey the true owner of the device. And, uhm, I doubt I'll ever see YOU fusing YOUR key. So if it "works for you", I'd better take your words about liberty and openness with a truckload of salt; the FSF isn't the worst arbiter I could listen to, unlike you.

                      Whatever; in secure-boot-enabled devices the only real owner is the one who holds the eFused key. Everyone else is barely a guest in their system. They could let you in. Or show you a middle finger. At their own discretion. So it "works for you" - unless you try to do something beyond their permissions, and most of the time replacing the kernel is not allowed, for obvious reasons.

