Libre RISC-V Snags $50k EUR Grant To Work On Its RISC-V 3D GPU Chip


  • #41
    Originally posted by lkcl View Post

    ok. so it's a little more complex than that. a standard RISC-V core, compliant with all official extensions (including RVV Vectorisation) is just simply not capable of the required performance to meet GPU workloads, at least not in a reasonable power budget.
    [...]
    so, "the idea is to create a software that transforms a RISC-V in a GPU capable of running Vulkan" - not quite: the idea is to write a Vulkan driver (actually mostly a Shader Compiler which compiles SPIRV into LLVM IR) *and* develop the hardware and the minimum custom accelerated opcodes and infrastructure needed to get us reasonable 3D performance *on the same CPU*. that CPU will happen to be based around a RISC-V compatible instruction set.
    Thanks for the explanation. It is much clearer now.
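
    For readers following along, the flow lkcl describes above (SPIR-V shaders compiled to LLVM IR, then to native code that can use a handful of custom 3D opcodes, all running on the same CPU) could be sketched very roughly as below. Every name here is a hypothetical placeholder for illustration, not the project's actual API:

        # Purely illustrative sketch of the driver flow described above;
        # none of these function names come from the actual project.

        def spirv_to_llvm_ir(spirv_blob: bytes) -> str:
            """Stand-in for the shader-compiler front-end (SPIR-V -> LLVM IR)."""
            return f"; LLVM IR for a {len(spirv_blob)}-byte SPIR-V module"

        def llvm_codegen(ir: str, target: str) -> bytes:
            """Stand-in for LLVM code generation to RISC-V plus custom 3D opcodes."""
            return f"[{target}] {ir}".encode()

        def compile_shader(spirv_blob: bytes) -> bytes:
            """Vulkan driver path: compile a shader for the same CPU that runs it."""
            ir = spirv_to_llvm_ir(spirv_blob)
            return llvm_codegen(ir, target="riscv64 + custom 3D opcodes")

        print(compile_shader(b"\x03\x02\x23\x07").decode())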



    • #42
      Originally posted by starshipeleven View Post
      I enjoy justice. People like you make the whole opensource and open hardware community look bad.
      ah. the sadistic "righteous crusader" type. the one who believes it is their absolute right and mission to speak loudly and widely to cause other people to fail "for the good of others". there's a simple solution: if you can do better, or if you know better, then instead of being a sadistic bully, why don't you take over the project? or can you find people who can do better, and we'll direct the funds from the NLNet Foundation at them, when they complete each of the Milestones, how about that?



      • #43
        Originally posted by the_scx View Post

        This GPU (or rather the "Vulkan accelerator") does not even have the performance of NVIDIA graphics chips from 15 years ago.
        Please just look at the spec:
        If it is faster than xf86-video-vesa and is able to give me a resolution native to my monitor then I do not see an issue whatsoever.
        Not to mention a Geforce 6 from ~2004 is easily fast enough for users that don't waste resources on crapware like Gnome 3.



        • #44
          Originally posted by lkcl View Post
          the one who believes it is their absolute right and mission to speak loudly and widely to cause other people to fail "for the good of others".
          I find it amusing how you still believe your failure is caused by others trash-talking the project, and not by its intrinsic merits (or lack thereof).

          why don't you take over the project?
          Same as EOMA68, the end product does not justify the effort.



          • #45
            Originally posted by microcode View Post
            Cool, good start for a third-party project. Esperanto Technologies prototyped a RISC-V based shader and they were able to get that working pretty quick. To make a competitive RISC-V shader core would take a sizable instruction set extension, but I wonder how big the extension would really need to be to make it just more useful than software rendering.
            that's what we're going to find out, pretty soon. Mitch Alsup (the designer of the Motorola 88000 and AMD's Opteron series of CPUs) was a consultant on the recent Samsung GPU; he told me this morning that he designed their Texturisation ISA opcodes. i think he said about 20 instructions were needed, and around 5 different texture "types", and he offered to help us out here as well. he's been... i cannot believe we are so lucky to have his help answering questions on fundamental architecture design.

            much of what we're doing is constrained - or more like "defined" by the Vulkan API. in that sense, it's a pretty straightforward "by the numbers" (haha) process. honestly though my primary focus as of right now is the Out-of-Order multi-issue Execution Engine that will form the basis of the Vector Processing to SIMD transparent conversion.
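
            As a very loose illustration of what "transparent Vector Processing to SIMD conversion" means here (a sketch only, assuming nothing about the actual engine internals): a single vector operation of arbitrary length gets broken up by the issue hardware into however many fixed-width SIMD element groups the back-end actually has.

                # Minimal sketch of splitting one variable-length vector op into
                # fixed-width SIMD batches, roughly what an issue stage might do.
                # The 4-lane width and the element-wise model are illustrative
                # assumptions, not the real design.

                SIMD_LANES = 4  # hypothetical back-end width

                def issue_vector_add(a, b):
                    """Element-wise add of two equal-length vectors, issued in SIMD batches."""
                    assert len(a) == len(b)
                    result = []
                    for start in range(0, len(a), SIMD_LANES):
                        batch_a = a[start:start + SIMD_LANES]  # one SIMD group
                        batch_b = b[start:start + SIMD_LANES]
                        result.extend(x + y for x, y in zip(batch_a, batch_b))
                    return result

                # a vector of length 10 is issued as 3 SIMD batches (4 + 4 + 2)
                print(issue_vector_add(list(range(10)), list(range(10))))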

            for the Vectorisation, purely on the FP32 side, we can get away with a mere 2 FP32 ops per core per clock to meet that 6 GFLOPs target. i suspect that what we will actually go with (unless it completely blows away the power figures) will be more like 4 FP32 ops per core per clock, giving double the numbers (12 FP32 GFLOPs). due to the way that the engine is designed, that would also give twice the FP16 numbers (25 FP16 GFLOPs).
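
            A quick sanity check of those figures, assuming the quad-core, 800 MHz configuration the project has described elsewhere (that configuration is an assumption here, not something stated in this comment):

                # Back-of-the-envelope check of the GFLOPs figures above.
                # The 4-core / 800 MHz configuration is an assumed value.

                cores, clock_ghz = 4, 0.8

                for fp32_ops_per_core_per_clock in (2, 4):
                    fp32_gflops = cores * clock_ghz * fp32_ops_per_core_per_clock
                    fp16_gflops = fp32_gflops * 2  # the engine doubles throughput at FP16
                    print(f"{fp32_ops_per_core_per_clock} FP32 ops/core/clock -> "
                          f"{fp32_gflops:.1f} FP32 GFLOPs, {fp16_gflops:.1f} FP16 GFLOPs")

                # prints roughly 6.4 / 12.8 and 12.8 / 25.6, matching the 6, 12 and 25 above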

            Jacob is the one with the deep experience in 3D: as you know he wrote a 3D engine a couple years back, under GSoC. this Vulkan driver came out of that effort, and between us (and with Mitch's input) we are working out what can remain in software and what has to be done as custom opcodes.

            summary: i don't honestly know the full answer: we'll find out, for sure, over the next few months, and i'm sure that there will be followup phoronix articles about what we end up discovering and learning

            that, for me, is half of what this project is about: learning and then documenting and communicating to others what we discover, so that other people can not only learn, they can take it forward another step further. in that, i feel that we've already succeeded because the code, the documentation and all of the updates are there for anyone and everyone to see.

            this transparency is particularly what NLNet were really happy with. they're keenly aware that software privacy only works if the hardware is fundamentally trustworthy. we don't *want* to be the ones that people come to, to ask "can we trust this hardware"; we'll tell them "don't ask us - no really, DON'T! find out for yourself! find a 3rd party independent auditor, run the formal verification suite for yourself, but definitely don't ask *us* if the hardware can be trusted!". i find this to be very funny.



            • #46
              Originally posted by kpedersen View Post
              If it is faster than xf86-video-vesa and is able to give me a resolution native to my monitor then I do not see an issue whatsoever.
              Not to mention a Geforce 6 from ~2004 is easily fast enough for users that don't waste resources on crapware like Gnome 3.
              Screen res is a very big issue.
              1280x720 at 25 fps is derpy on a laptop screen and just bad on a monitor.

              A Geffo 6 from 2004 has a max res of 2048x1536 @ 85 Hz.

              This thing will be pushing like 921'600 pixels vs the 3'145'728 of the Geffo 6, that's around 3.4 times fewer pixels, and around 3.4 times lower framerate.

              It won't be able to drive even old monitors, no matter how hard you push it. Not the 4K ones - even the plain Full HD ones.
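
              For reference, the raw arithmetic behind that comparison (using only the resolution and refresh figures quoted above):

                  # Pixel-count and framerate comparison from the figures above.
                  libre_px = 1280 * 720    # 921,600 pixels at the stated 25 fps
                  gf6_px   = 2048 * 1536   # 3,145,728 pixels at up to 85 Hz

                  print(f"pixel ratio:     {gf6_px / libre_px:.1f}x")  # ~3.4x
                  print(f"framerate ratio: {85 / 25:.1f}x")            # 3.4x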



              • #47
                Originally posted by starshipeleven View Post
                Screen res is a very big issue.
                1280 x 720 25 fps is derpy on a laptop screen and just bad on a monitor.
                And yet people don't mind that NVIDIA has a feature which renders the video at a decreased resolution and anti-aliases the crap out of it in order to get better framerates, so that RTX is actually usable.

                We're talking about the mobile market. You're good at trolling on the internet, but not at critical thinking.



                • #48
                  Originally posted by lkcl View Post
                  it has to be pointed out that if you're comparing against desktop GPUs, you're just plainly not familiar with the embedded GPU market (and haven't read what i wrote only a couple of comments ago).
                  You didn't read carefully. I made this comparison only because someone else compared this Vulkan accelerator to a GeForce GPU from 15 years ago. We have people here who believe that this Vulkan accelerator can successfully replace desktop and mobile GPUs in the near future. Unfortunately, that is extremely unlikely.
                  Anyway, I believe that this Vulkan accelerator is comparable to mobile GPUs from 10 years ago (that's why I said that "maybe 10 years ago 5-6 GFLOPs wouldn't look so bad on the smartphone market") and to desktop GPUs from 20 years ago. To be honest, it is very hard to compare current GPUs to chips from 15-25 years ago, because graphics processing units have undergone tremendous evolution, from fixed-function GPUs, through programmable GPUs, to unified shader processors. That's why it is extremely hard to compare e.g. a Riva TNT2 or a GeForce2 Ultra with modern GPUs that support Vulkan with SPIR-V shaders.

                  Originally posted by lkcl View Post
                  these are the numbers from Vivante GC800, which costs USD $250,000 to license, and has staggeringly good power efficiency.
                  The Vivante GC800 is a GPU from 2011-2012, made on a 65nm process. It was designed for low-cost MIPS and ARMv7 solutions. Around 2012 it was hugely outclassed by the Mali-400 GPU family. It was so bad that Actions Semiconductor replaced it with PowerVR Series5 (SGX), i.e. the SGX540 from November 2007. That wasn't better, but it was at least worth the price.

                  Originally posted by lkcl View Post
                  Adreno has a decade of incremental development behind it, and the financial resources of a billion-dollar company behind it (Qualcomm) that allow them to make 5 MILLION dollar iterative mistakes at a time, on failed tape-outs.
                  Believe me, I understood that money is an issue here. A few days ago we were wondering where the Chinese would get the money to finance their hypothetical RISC-V processors.
                  Originally posted by the_scx View Post
                  Arm Holdings has money to develop its ISA and cores because it sells licenses. Qualcomm and MediaTek have money because they sell a lot of chips. Where do the Chinese get the money to develop their processors? I mean, state funds aren't infinite, and currently RISC-V CPUs aren't even competitive with 0.5-2 USD Cortex-M chips. So, where would you sell them? There are not many things for which these CPUs would be suitable. Maybe some kind of digital photo frames, weather forecast stations or something like that? Sure, it should be possible to put these chips in some kind of home appliances (e.g. fridges or microwave ovens) and consumer electronics (like DVRs or Blu-ray disc players), but I bet that it is cheaper to just buy ARM chips than to port the Linux IoT platforms to RISC-V.
                  Maybe with multibillion investments each year, they would be able to achieve something in 10 years, but this is very unlikely to happen in my opinion.
                  And we obviously know that your budget is much smaller.

                  A lot of Linux mobile platforms have failed due to both financial and technical reasons:
                  - Sailfish OS: After multiple issues with the Jolla Tablet's production, financing and schedule, a small number of devices were shipped in autumn 2015, while the company stated in December 2015 that, due to the non-availability of necessary components, production was to be stopped and the remaining backers should receive a full refund. As of 23 December 2016, Jolla had still not provided a timeline to fully refund backers, stating "we will commence with that [refund] as our financial situation allows." (Wikipedia)
                  - Ubuntu Touch: The Ubuntu Edge fell short of its funding goal, raising only $12,809,906, with 5682 pledges to purchase the standard model of the handset. (Wikipedia)
                  It is really hard to believe that someone will put a lot of money, just like that, into an almost hobbyist project that makes no economic sense.

                  Originally posted by lkcl View Post
                  you missed it, twice, i will say it again: we are *deliberately* going for a lower performance (with a lower power budget)
                  I missed it?! No, I am definitely not the one who believes that this Vulkan accelerator would be able to replace the iGPU in desktops or the mobile GPU in high-end smartphones, at least not in the near future. However, others do.
                  Originally posted by oiaohm View Post
                  Not at all; more of their existing SoC design parts could be transferred to RISC-V cores, so it would not be a new SoC from scratch. In fact it is possible to do a new RISC-V SoC from base parts in about 4 months. So with what Huawei has, it could be as little as 6 months before chips start appearing.
                  Originally posted by oiaohm View Post
                  There is already open-source IP for all that stuff for RISC-V. Yes, this stuff plugs into the RISC-V Chisel generation process. The reality here is that it is possible to design and make a high-end RISC-V chip, with luck, in under 2 months; my 6 months is to allow for a few failures.
                  Originally posted by oiaohm View Post
                  The reason why I said they had to do it this year is not the CPU; the ARM CPU was already replaceable by RISC-V designs last year.
                  https://riscv.org/2019/02/phoronix-a...being-plotted/
                  In 2020 we should start seeing the RISC-V low-power GPU start turning up.

                  So in 2020 the Cortex-A core designs, as well as the Mali GPU, get swapped for a full-feature RISC-V core and a custom-feature RISC-V GPU; highly likely this would be a core using open-source hardware that is not restricted by government sanctions.
                  So, since you are already here, you should cut off this nonsense. Just make it clear that this is extremely unrealistic. Say it. Because it is F**KING IMPOSSIBLE! I would rather believe in a North Korean manned mission to the Moon in 2020 than this b******t.

                  I'm sorry that I raised my voice, but I'm sick of trolls on this forum talking total nonsense and referring to your project as an example of a powerful libre-licensed GPU for RISC-V (or any other ISA) mobile SoCs that could compete with the latest Mali or Adreno GPUs in terms of performance and functionality. We both know that it is impossible.

                  However, I believe that it may be possible to make use of your project in the next 10 years when it comes to GPUs for domestic processors, e.g. Elbrus or Baikal. Currently, the Russians tend to use very simple 2D GPUs from Taiwan. The Chinese already have their own GPUs for such purposes: JARI G12, GP101, JM7200, Elite (Elite 1000, Elite 2000), Chrome (Chrome 640/645 derivatives: Chrome 1000, Chrome 320, Chrome 860, Chrome 960), PowerVR (Imagination Technologies was acquired by a China-aligned private equity fund in 2017). However, if another country starts its own technological-independence program, it can use an open hardware GPU, at least as a base.

                  Originally posted by lkcl View Post
                  so 7nm would reduce that 2.5 watts down to a scant 0.6 watts - 600mW.
                  It's still too high a power consumption for smartwatches (unless you are totally okay with a watch that works for an hour and a half).
                  Anyway, tell me who will invest a lot of money into chips that are not competitive with current ARM chips, both in terms of hardware and software. As you already know, the 7nm FinFET process is extremely expensive. That's why you don't use it unless it is really worth it. Even Intel stayed with 14nm for a long time due to the high cost of the 10nm process.
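
                  To put that 600 mW figure in smartwatch terms (the battery capacity below is an assumed typical value, not something from this thread):

                      # Rough battery-life estimate for a 600 mW GPU load in a smartwatch.
                      # The ~1.1 Wh capacity (300 mAh at 3.7 V) is an assumed typical figure.

                      battery_wh = 0.300 * 3.7  # ~1.11 Wh
                      gpu_watts = 0.6           # the 7nm estimate quoted above

                      print(f"~{battery_wh / gpu_watts:.1f} hours from the GPU draw alone")  # roughly 1.8 hours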

                  Originally posted by lkcl View Post
                  I was talking about Flatpak. The PureOS Store is supposed to be based around flatpaks. If you don't know how it works, you should take a look at the official documentation. It doesn't use host libs at all. Every single flatpak-ed app uses a runtime. Currently, Flathub supports only the Freedesktop, GNOME and KDE runtimes. They are available only on ARM (ARMv7 and AArch64) and x86 (x86-32 and x86-64). Even MIPS64 and PPC64LE are not supported here.
                  We would have a similar problem with Snap. This, in turn, is important for Ubuntu Touch.
                  Do you know the main difference between a smartphone and a basic cell phone? The first one allows the user to install apps. Without a functioning store, such a phone would be just a feature phone.
                  In addition to Ubuntu Touch and PureOS, we still have:
                  - Tizen - The latest smartphone with Tizen is the Samsung Z4. It was released in 2017, so as we can guess, the platform is almost dead. It is partially closed-source, so it is impossible to port it to RISC-V without the vendor's involvement.
                  - Sailfish OS - A complete business failure. Again, partially closed-source, the same story as above.
                  - KaiOS - The Firefox OS fork that no one cares about (maybe except India).
                  Of course, in a perfect world, it would be enough to just port AOSP (Android Open Source Project) to RISC-V, provide an Android NDK for this architecture, and maybe create a binary translation layer similar to libhoudini (AArch64 to RISC-V). However, we already know that without Google's involvement it is almost impossible, and they don't even think about RISC-V when it comes to mobile.
                  Last edited by the_scx; 05 June 2019, 04:05 PM.



                  • #49
                    Originally posted by kpedersen View Post
                    If it is faster than xf86-video-vesa and is able to give me a resolution native to my monitor then I do not see an issue whatsoever.
                    Is 1280x720 really the native resolution of your monitor? I don't know what to say. I feel sorry for you...

                    Originally posted by kpedersen View Post
                    Not to mention a Geforce 6 from ~2004 is easily fast enough for users that don't waste resources on crapware like Gnome 3.
                    It is not anywhere near a GeForce 6. It is somewhere between an NVIDIA Riva 128 (1997) and a GeForce 3 (2001), but with a low maximum resolution.
                    And good luck with watching videos without hardware support for decoding H.264, H.265 and VP9. What is more, a lot of desktop programs use OpenGL to make UI drawing faster and OpenCL to speed up calculations. Here you have no support for OpenGL or OpenCL at all. XvMC, XvBA, VA-API, VDPAU and NVENC are not supported either. It focuses exclusively on the Vulkan API.

                    BTW: What's wrong with Intel graphics processors?
                    Last edited by the_scx; 05 June 2019, 03:12 PM.



                    • #50
                      Originally posted by profoundWHALE View Post
                      And yet people don't mind that NVIDIA has a features which renders the video at a decreased resolution, anti-aliases the crap out of it, in order to get better framerates so RTX is actually usable.
                      Yeah, but if you rely on idiots to sell to, you have already shown your true colors, haven't you?

                      We're talking about the mobile market.
                      No, we are not. We aren't in 1999 anymore; this hardware is so far from being even remotely competitive that no sane manufacturer will EVER integrate it.

                      You need to understand that this is HARDWARE design. You can't go to a foundry and ask them to make a few dozen units of your design, so even assuming they pull this off, no one will be able to manufacture this hardware at all, certainly not for the mobile segment.

                      Even assuming they get anywhere near the 2.5 W power consumption, in mobile that's higher than the whole goddamn system's power budget. Hell, Intel's integrated graphics on a laptop have a similar power budget.

                      This thing can ONLY work out as a hack of an existing RISC-V SoC that is mounted on a dedicated card you plug into some ancient FSF-certified board running libreboot or something.

