Libre RISC-V Snags $50k EUR Grant To Work On Its RISC-V 3D GPU Chip


  • #21
    Originally posted by oiaohm View Post
    Mathematically sound breaks down under power analysis, EM and so on. The same things that leak information can also allow outside interference.
    Then we disagree on the terms used; as a trained mathematician I don't want to be involved in anything practical and related to the real world =)


    • #22
      Originally posted by lkcl View Post
      i have been talking with jean-paul from LIP6.fr (alliance / coriolis2) - and the answer would appear to be, amazingly, "not a lot". the reason is that the design layout scales completely linearly.

      so as long as you can still get 7nm "cells" (as they are called) that fit exactly with the 28nm version that you did, you have pretty much zero layout changes needed.
      I wrote 14nm or 7nm for a reason. You were asked about going straight down to 7nm.

      14nm has tracks thick enough to work fairly well with 28nm voltages and currents. I suspect this is part of the reason AMD structures its chiplet/SiP model the way it does: 14nm dies provide the power and signal feeds out of the system-in-package (SiP), and the 7nm-and-smaller dies do the high-performance work inside the SiP.

      Basically there is a problem when you go under 14nm and try to drive external memory controllers and the like: power handling. There simply isn't enough silicon to handle the power switching. It's the same reason high-voltage MOSFETs use something like a 120nm process or similar.

      Scaling a design down to 7nm is simple; having it work with only 7nm parts is another problem, and maybe impossible. For performance you want the high-performance parts on as small a node as possible, but the external-signal parts seem to hit a wall at 10-14nm.

      We are basically starting to see different silicon production walls appear, where different sections of a design can only shrink to a particular node and no further. And this is happening well before the absolute production wall.


      • #23
        Originally posted by discordian View Post
        Then we disagree on the terms used; as a trained mathematician I don't want to be involved in anything practical and related to the real world =)
        "Mathematically sound" in applied security is broader than most people who have done coursework in mathematics think. So you are fairly alone in thinking that doing maths is not practical or related to the real world.

        Mathematically verified silicon is starting to come out of CSIRO Data61-related projects. This is where a design is put through full formal mathematical proofs of its function.

        Sorry, the separation between the practical and your maths is basically gone on the security side. If you say something is secure, you need to put up a mathematical proof covering all the possible problems and showing they don't exist. CSIRO and others employ a lot of mathematicians to do these insanely complex proofs.

        Yes, of course the physics people lay out the basic structures that the mathematicians have to base their proofs around.
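
        To give a flavour of what such a proof of function looks like, here is a toy, purely hypothetical example in Lean (Data61's seL4 work actually uses Isabelle/HOL, so this is only an illustration of the idea): a 1-bit multiplexer plus machine-checked theorems that it always selects the input the spec demands.

        Code:
        -- Toy, hypothetical example of a machine-checked "proof of function":
        -- a 1-bit multiplexer and theorems covering every select value.
        def mux (sel a b : Bool) : Bool :=
          match sel with
          | true  => a
          | false => b

        -- For each possible select value the output is exactly what the spec demands.
        theorem mux_sel_true  (a b : Bool) : mux true  a b = a := rfl
        theorem mux_sel_false (a b : Bool) : mux false a b = b := rfl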


        • #24
          Originally posted by kpedersen View Post
          The original article mentioned "for mobile devices". Sod that, if this project is successful, I feel it could be a fantastic asset for all types of devices, even those that are not locked down pieces of consumer / gamer shite!

          Sure, there will be some backlash and bad press saying that "it isn't as fast as a Geforce from 5 years ago" but I would still exclusively buy libre GPUs based on this technology and never look back to the dark old days ever again!
          This GPU (or rather the "Vulkan accelerator") does not even have the performance of NVIDIA graphics chips from 15 years ago.
          Please just look at the specs:
          - 720p@25FPS - A lot of smartphones provide HiDPI displays. I believe that something around 1080p is currently the bare minimum for modern smartphones. And 25 FPS does not impress either.
          - 100 Mpixels/sec - The same figure was achieved by the NVIDIA Riva 128 from 1997...
          - 30 Mtriangles/sec - Comparable to the first Xbox (2001), which used the NV2A, a derivative of the GeForce 3. According to the spec sheet, "the RSX in the PS3 (2006) has something like 250 million triangles per second". Anyway, "today pipelines are too complex to be measured by such a unit of measurement".
          - 5-6 GFLOPS - Comparable to the PlayStation 2 (2000) with its 147 MHz Graphics Synthesizer. The Adreno 640 has 898.56 GFLOPS in FP16 and 449.28 GFLOPS in FP32! Maybe 10 years ago 5-6 GFLOPS wouldn't look so bad on the smartphone market, but today it only makes people laugh.
          As you can see, it is definitely not suitable for any kind of modern smartphone, even a low-end one (~100 USD). And it is not even finished, and won't be before 2020!

          Maybe this performance would be enough for cheap smartwatches. However, power consumption (2.5 W) is definitely too high. The 38mm variant of the Apple Watch has a 3.8 V 0.78 W·h (205 mA·h) battery which is able to power the device for many hours! For the same reason, this RISC-V solution is unsuitable for digital photo frames, weather forecast stations or similar devices.
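
          A rough sanity check of these figures (my own back-of-the-envelope arithmetic, using only the numbers quoted in this post):

          Code:
          // Back-of-the-envelope arithmetic for the figures quoted above.
          fn main() {
              // Pixel rate needed just to repaint a 1280x720 screen 25 times a second.
              let px_per_second = 1280u64 * 720 * 25;
              println!("720p @ 25 FPS needs ~{} Mpixels/s", px_per_second / 1_000_000); // ~23

              // The quoted 100 Mpixels/s fill rate leaves roughly this much overdraw headroom.
              println!("overdraw headroom: ~{:.1}x", 100_000_000f64 / px_per_second as f64); // ~4.3x

              // How long a 0.78 Wh smartwatch battery would last against a 2.5 W GPU alone.
              println!("0.78 Wh / 2.5 W = ~{:.0} minutes", 0.78f64 / 2.5 * 60.0); // ~19 minutes
          }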

          [Image: RISC-V smartwatch] That's what a modern RISC-V based smartwatch could look like. 😉

          [Image: RISC-V battery cells] Additional battery cells to power RISC-V based mobile devices. 😉



          We have one more problem here - there is no mature mobile software platform on RISC-V at all. All current and planned Linux solutions are tied to x86 and ARM CPUs. This includes Tizen, Sailfish OS, Ubuntu Touch, KaiOS and PureOS. What is worse, a port to RISC-V wouldn't be so easy. For example, the PureOS Store is supposed to be based around flatpaks. However, the Freedesktop runtime (as well as its derivatives, GNOME and KDE) doesn't support RISC-V. It is available only on ARM (ARMv7 and AArch64) and x86 (x86-32 and x86-64).

          I am not saying that this RISC-V solution is completely useless. I believe there are some applications where it could fit. In my opinion, this initiative makes more sense than the OGP (Open Graphics Project). However, do not expect mass adoption in commercial devices. It just won't happen.
          Last edited by the_scx; 05 June 2019, 02:45 PM.


          • #25
            Originally posted by uid313 View Post
            Besides primitive operations, will it support hardware-accelerated cryptography and encoding? Such as AES-256 or AV1 decoding?
            Besides a basic instruction set, will it support advanced instructions like those in SSE4, FMA and AVX-512?
            Will it support virtualization?
            That's a GPU. Everything you list is either a hardware accelerator (independent of the GPU cores) or a CPU instruction that makes no sense on a GPU.


            • #26
              Originally posted by starshipeleven View Post
              That's a GPU. Everything you list is either a hardware accelerator (independent of the GPU cores) or a CPU instruction that makes no sense on a GPU.
              Okay, well in that case: will it support HDMI, DisplayPort, OpenGL 4.6, OpenCL, ASTC, ETC2, VESA Adaptive-Sync, tessellation, raytracing?


              • #27
                Originally posted by lkcl View Post
                oh? you're aware that several people have lied both publicly and privately about conversations that they've had with me, causing the EOMA68 project to be set back by at least three years, due to the harm that they caused? those people are directly responsible for the ongoing environmental damage that EOMA68 *would* have reduced if it had been possible to complete earlier, because more volunteers would have helped out to get it to a crucial threshold point.
                Lol, the EOMA68 project was "set back at least three years" because "several people lied about conversations they had", so "volunteers didn't materialize".

                That's why quite a few people are skeptical about you.

                EOMA68 is a completely dumb design with zero growth potential, because your interface is too limited even for basic connectivity.
                That's why the volunteers didn't appear. Anyone who actually has any idea wtf is going on is NOT going to be on board with that crap.

                You are just good at convincing non-technology-savvy people to give you money.


                • #28
                  Originally posted by uid313 View Post
                  Okay, well in that case: will it support HDMI, DisplayPort, OpenGL 4.6, OpenCL, ASTC, ETC2, VESA Adaptive-Sync, tessellation, raytracing?
                  Why can't you fucking read the article?

                  using a RISC-V chip running a Rust-written Vulkan software renderer (similar to what LLVMpipe is to OpenGL on CPUs) for providing libre 3D graphics.

                  This is basically a multicore RISC-V CPU, hopefully (but probably not) with some minor instruction additions to make it a bit more GPU-like, doing software rendering of Vulkan and presenting itself as a GPU to the larger system.
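
                  To make "software rendering" concrete: at the bottom it is ordinary CPU code looping over pixels. A purely illustrative sketch (not the project's actual code) of the inner loop of a software triangle rasterizer:

                  Code:
                  // Illustrative only: the per-pixel work a CPU does when rasterizing
                  // one triangle in software. A real renderer would tile, vectorize
                  // and spread this across many cores.
                  fn edge(a: (f32, f32), b: (f32, f32), p: (f32, f32)) -> f32 {
                      // Signed area test: which side of edge a->b does point p fall on?
                      (b.0 - a.0) * (p.1 - a.1) - (b.1 - a.1) * (p.0 - a.0)
                  }

                  fn main() {
                      const W: usize = 16;
                      const H: usize = 16;
                      let mut framebuffer = vec![b'.'; W * H];

                      // One counter-clockwise triangle in pixel coordinates.
                      let (v0, v1, v2) = ((1.0, 1.0), (14.0, 3.0), (5.0, 14.0));

                      // Brute-force loop over every pixel of the tiny "screen".
                      for y in 0..H {
                          for x in 0..W {
                              let p = (x as f32 + 0.5, y as f32 + 0.5);
                              let inside = edge(v0, v1, p) >= 0.0
                                  && edge(v1, v2, p) >= 0.0
                                  && edge(v2, v0, p) >= 0.0;
                              if inside {
                                  framebuffer[y * W + x] = b'#';
                              }
                          }
                      }

                      // Dump the 16x16 "framebuffer" as ASCII art.
                      for row in framebuffer.chunks(W) {
                          println!("{}", std::str::from_utf8(row).unwrap());
                      }
                  }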

                  None of what you said is supported or will be.


                  • #29
                    Originally posted by starshipeleven View Post

                    You are just good at convincing non-technology-savvy people to give you money.
                    ah, starshipeleven, i remember you now, from some other posts a couple of years back. feel free to believe what is most useful to you.


                    • #30
                      Originally posted by lkcl View Post
                      ah, starshipeleven, i remember you now, from some other posts a couple of years back. feel free to believe what is most useful to you.
                      I take great pride in being "directly responsible for the ongoing environmental damage that EOMA68 *would* have reduced if it had been possible to complete earlier".

                      Last edited by starshipeleven; 05 June 2019, 08:28 AM.
