Libre RISC-V Snags $50k EUR Grant To Work On Its RISC-V 3D GPU Chip
Originally posted by starshipeleven:
I enjoy justice. People like you make the whole open-source and open-hardware community look bad.
Originally posted by the_scx:
This GPU (or rather the "Vulkan accelerator") does not even have the performance of NVIDIA graphics chips from 15 years ago.
Please just look at the spec:
Not to mention that a GeForce 6 from ~2004 is easily fast enough for users who don't waste resources on crapware like Gnome 3.
Originally posted by lkcl:
the one who believes it is their absolute right and mission to speak loudly and widely to cause other people to fail "for the good of others".
why don't you take over the project?
Originally posted by microcode:
Cool, a good start for a third-party project. Esperanto Technologies prototyped a RISC-V based shader and they were able to get it working pretty quickly. Making a competitive RISC-V shader core would take a sizable instruction-set extension, but I wonder how big the extension would really need to be to make it just more useful than software rendering.
much of what we're doing is constrained - or, more like, "defined" - by the Vulkan API. in that sense, it's a pretty straightforward "by the numbers" (haha) process. honestly, though, my primary focus right now is the out-of-order multi-issue execution engine that will form the basis of the transparent Vector-Processing-to-SIMD conversion.
for the Vectorisation, just purely the FP32 side, we can get away with just a mere 2 FP32 ops per core per clock, to meet that 6 GFLOPs target. i suspect that what we will actually go with (unless it completely blows away the power figures) will be more like 4x FP32 ops per core per clock, giving double the numbers (12 FP32 GFLOPs). due to the way that the engine is designed, that would also give twice the FP16 numbers (25 FP16 GFLOPs).
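The GFLOPS figures above line up under a small multi-core budget. A minimal sketch of the arithmetic, assuming (purely for illustration - the core count and clock are not stated in the post) a quad-core design clocked at 800 MHz:

```python
# Back-of-the-envelope check of the GFLOPS targets quoted above.
# ASSUMPTIONS (not stated in the post): 4 cores at 800 MHz.

def gflops(cores: int, clock_hz: float, ops_per_core_per_clock: int) -> float:
    """Peak throughput in GFLOPs: cores x clock x FP ops issued per clock."""
    return cores * clock_hz * ops_per_core_per_clock / 1e9

CORES = 4          # assumed
CLOCK_HZ = 800e6   # assumed 800 MHz

fp32_min = gflops(CORES, CLOCK_HZ, 2)    # 2 FP32 ops/core/clock -> 6.4 GFLOPs
fp32_wide = gflops(CORES, CLOCK_HZ, 4)   # 4 FP32 ops/core/clock -> 12.8 GFLOPs
fp16_wide = fp32_wide * 2                # FP16 packs two ops per FP32 lane -> 25.6

print(fp32_min, fp32_wide, fp16_wide)
```

Under those assumed parameters, the ~6, ~12 and ~25 GFLOPs numbers in the post fall out directly.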
Jacob is the one with the deep experience in 3D: as you know he wrote a 3D engine a couple years back, under GSoC. this Vulkan driver came out of that effort, and between us (and with Mitch's input) we are working out what can remain in software and what has to be done as custom opcodes.
summary: i don't honestly know the full answer: we'll find out, for sure, over the next few months, and i'm sure that there will be followup phoronix articles about what we end up discovering and learning.
that, for me, is half of what this project is about: learning, then documenting and communicating what we discover, so that other people can not only learn but take it another step further. in that, i feel that we've already succeeded, because the code, the documentation and all of the updates are there for anyone and everyone to see.
this transparency is particularly what NLNet were really happy with. they're keenly aware that software privacy only works if the hardware is fundamentally trustworthy. we don't *want* to be the ones that people come to and ask "can we trust this hardware?" - we'll tell them "don't ask us - no, really, DON'T! find out for yourself! find a 3rd-party independent auditor, run the formal verification suite for yourself, but definitely don't ask *us* if the hardware can be trusted!". i find this to be very funny.
Originally posted by kpedersen:
If it is faster than xf86-video-vesa and is able to give me a resolution native to my monitor then I do not see an issue whatsoever.
Originally posted by kpedersen:
Not to mention that a GeForce 6 from ~2004 is easily fast enough for users who don't waste resources on crapware like Gnome 3.
1280 x 720 at 25 fps is derpy on a laptop screen and just bad on a monitor.
A GeForce 6 from 2004 has a max resolution of 2048x1536 @ 85 Hz.
This thing will be pushing 921,600 pixels vs the GeForce 6's 3,145,728 - around 3.4 times fewer pixels, at around 3 times lower framerate.
It won't be able to drive even old monitors, no matter how hard you push it. Not the 4K ones - the Full HD ones.
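The pixel and framerate arithmetic in that comparison checks out; a quick sketch using only the figures quoted in the thread:

```python
# Sanity-check of the resolution comparison above, using only the
# numbers quoted in the thread (no extra assumptions).

def pixels(width: int, height: int) -> int:
    """Total pixel count for a given resolution."""
    return width * height

target_res = pixels(1280, 720)      # 921,600 pixels
geforce6_max = pixels(2048, 1536)   # 3,145,728 pixels

pixel_ratio = geforce6_max / target_res   # ~3.41x fewer pixels at 720p
fps_ratio = 85 / 25                       # 3.4x lower refresh (85 Hz vs 25 fps)

print(target_res, geforce6_max, round(pixel_ratio, 2), fps_ratio)
```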
Originally posted by starshipeleven:
Screen res is a very big issue.
1280 x 720 at 25 fps is derpy on a laptop screen and just bad on a monitor.
We're talking about the mobile market. You're good at trolling on the internet, but not at critical thinking.
Originally posted by lkcl:
it has to be pointed out that if you're comparing against desktop GPUs, you're just plainly not familiar with the embedded GPU market (and haven't read what i wrote only a couple of comments ago).
Anyway, I believe that this Vulkan accelerator is comparable to mobile GPUs from 10 years ago (that's why I said that "maybe 10 years ago 5-6 GFLOPs wouldn't look so bad on the smartphone market") and desktop GPUs from 20 years ago. To be honest, it is very hard to compare current GPUs to chips from 15-25 years ago, because graphics processing units have undergone a tremendous evolution: from fixed-function GPUs, through programmable GPUs, to unified shader processors. That's why it is extremely hard to compare, e.g., the Riva TNT2 or GeForce2 Ultra with modern GPUs that support Vulkan with SPIR-V shaders.
Originally posted by lkcl:
these are the numbers from the Vivante GC800, which costs USD $250,000 to license, and has staggeringly good power efficiency.
Originally posted by lkcl:
Adreno has a decade of incremental development behind it, and the financial resources of a billion-dollar company (Qualcomm) that allow them to make 5 MILLION dollar iterative mistakes at a time, on failed tape-outs.
Originally posted by the_scx:
Arm Holdings has money to develop its ISA and cores because it sells licenses. Qualcomm and MediaTek have money because they sell a lot of chips. Where do the Chinese get the money to develop their processors? I mean, state funds aren't infinite, and currently RISC-V CPUs aren't even competitive with 0.5-2 USD Cortex-M chips. So, where would you sell them? There are not many things for which these CPUs would be suitable. Maybe some kind of digital photo frames, weather stations or something like that? Sure, it should be possible to put these chips in some kinds of home appliances (e.g. fridges or microwave ovens) and consumer electronics (like DVRs or Blu-ray disc players), but I bet it is cheaper to just buy ARM chips than to port the Linux IoT platforms to RISC-V.
Maybe with multibillion investments each year, they would be able to achieve something in 10 years, but this is very unlikely to happen in my opinion.
A lot of Linux mobile platforms have failed due to both financial and technical reasons:
- Sailfish OS: After multiple issues with the production of the Jolla Tablet, its financing and its schedule, a small number of devices were shipped in autumn 2015, while the company stated in December 2015 that, due to the non-availability of necessary components, production was to be stopped and the remaining backers should receive a full refund. As of 23 December 2016, Jolla had still not provided a timeline to fully refund backers, stating "we will commence with that [refund] as our financial situation allows." (Wikipedia)
- Ubuntu Touch: The Ubuntu Edge fell short of its funding goal, raising only $12,809,906, with 5682 pledges to purchase the standard model of the handset. (Wikipedia)
It is really hard to believe that someone will put a lot of money, just like that, into an almost hobbyist project that makes no economic sense.
Originally posted by lkcl:
you missed it, twice, i will say it again: we are *deliberately* going for a lower performance (with a lower power budget)
Originally posted by oiaohm:
Not at all; more of their existing SoC design parts could be transferred to RISC-V cores, so it would not be a new SoC from scratch. In fact it is possible to do a new RISC-V SoC from base parts in about 4 months. So, with what Huawei has, it could be as little as 6 months before chips start appearing.
Originally posted by oiaohm:
There is already open-source IP for all that stuff for RISC-V. Yes, this stuff plugs into the RISC-V Chisel generation process. The reality here is that it is possible, with luck, to design and make a high-end RISC-V chip in under 2 months; my 6-month figure allows for a few failures.
Originally posted by oiaohm:
The reason why I said they had to do it this year is not the CPU; the Arm CPU was replaceable by RISC-V designs last year.
https://riscv.org/2019/02/phoronix-a...being-plotted/
In 2020 we should start seeing the low-power RISC-V GPU turning up. So in 2020 the Cortex-A core designs, as well as the Mali GPU, get swapped for a full-featured RISC-V core and a custom RISC-V GPU; it is highly likely this would be a core using open-source hardware that is not restricted by government sanctions.
I'm sorry for bringing this up again, but I'm sick of trolls on this forum talking total nonsense and referring to your project as an example of a powerful libre-licensed GPU for RISC-V (or any other ISA) mobile SoCs that could compete with the latest Mali or Adreno GPUs in terms of performance and functionality. We both know that is impossible.
However, I believe it may be possible to make use of your project within the next 10 years when it comes to GPUs for domestic processors, e.g. Elbrus or Baikal. Currently, the Russians tend to use very simple 2D GPUs from Taiwan. The Chinese already have their own GPUs for such purposes: JARI G12, GP101, JM7200, Elite (Elite 1000, Elite 2000), Chrome (Chrome 640/645 derivatives: Chrome 1000, Chrome 320, Chrome 860, Chrome 960), PowerVR (Imagination Technologies was acquired by a China-aligned private-equity fund in 2017). However, if another country starts a technological-independence program, it can use an open-hardware GPU, at least as a base.
Originally posted by lkcl:
so 7nm would reduce that 2.5 watts down to a scant 0.6 watts - 600mW.
Anyway, tell me who will invest a lot of money into chips that are not competitive with current ARM chips, in terms of both hardware and software. As you already know, the 7nm FinFET process is extremely expensive; that's why you don't use it unless it is really worth doing. Even Intel stayed with 14nm for a long time due to the high cost of its 10nm process.
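For what it's worth, the roughly 4x drop lkcl quotes (2.5 W down to ~0.6 W) is in the ballpark of what a first-order dynamic-power model predicts for a node shrink. A rough sketch - the capacitance and voltage ratios below are round numbers assumed purely for illustration, not real foundry scaling figures:

```python
# Illustrative only: dynamic CMOS power scales roughly as C * V^2 * f.
# ASSUMED ratios (new/old) chosen to show how a shrink to 7nm *could*
# yield the ~4x reduction quoted above; real process data differs.

def scaled_power(p_old: float, cap_ratio: float, volt_ratio: float,
                 freq_ratio: float = 1.0) -> float:
    """New dynamic power, given new/old ratios for capacitance, voltage, frequency."""
    return p_old * cap_ratio * volt_ratio ** 2 * freq_ratio

p_old_node = 2.5  # watts, from the quote
p_7nm = scaled_power(p_old_node, cap_ratio=0.35, volt_ratio=0.83)
print(round(p_7nm, 2))  # ~0.6 W
```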
Originally posted by lkcl:
oink?
GNOME on RISC-V: https://fedoraproject.org/wiki/Archi...xpansion_board
Plasma Mobile on RISC-V https://www.reddit.com/r/kde/comment...iscv_hardware/
We would have a similar problem with Snap. This, in turn, is important for Ubuntu Touch.
Do you know the main difference between a smartphone and a basic cell phone? The former allows the user to install apps. Without a functioning app store, such a phone would be just a feature phone.
In addition to Ubuntu Touch and PureOS, we still have:
- Tizen - The latest smartphone with Tizen is the Samsung Z4. It was released in 2017, so, as we can guess, the platform is almost dead. It is partially closed-source, so it is impossible to port it to RISC-V without the vendor's involvement.
- Sailfish OS - A complete business failure. Again, partially closed-source; the same story as above.
- KaiOS - The Firefox OS fork that no one cares about (except maybe in India).
Of course, in a perfect world, it would be enough to just port AOSP (the Android Open Source Project) to RISC-V, provide the Android NDK for this architecture, and maybe create a binary translation layer similar to libhoudini (AArch64 to RISC-V). However, we already know that without Google's involvement this is almost impossible, and they don't even think about RISC-V when it comes to mobile.
Last edited by the_scx; 05 June 2019, 04:05 PM.
Originally posted by kpedersen:
If it is faster than xf86-video-vesa and is able to give me a resolution native to my monitor then I do not see an issue whatsoever.
Originally posted by kpedersen:
Not to mention that a GeForce 6 from ~2004 is easily fast enough for users who don't waste resources on crapware like Gnome 3.
And good luck watching videos without hardware decoding of H.264, H.265 and VP9. What is more, a lot of desktop programs use OpenGL to draw their UIs faster and OpenCL to speed up calculations. Here there is no support for OpenGL or OpenCL at all; XvMC, XvBA, VA-API, VDPAU and NVENC are not supported either. It focuses exclusively on the Vulkan API.
BTW: What's wrong with Intel graphics processors?
Last edited by the_scx; 05 June 2019, 03:12 PM.
Originally posted by profoundWHALE:
And yet people don't mind that NVIDIA has a feature which renders the video at a decreased resolution and anti-aliases the crap out of it in order to get better framerates, so that RTX is actually usable.
We're talking about the mobile market.
You need to understand that this is HARDWARE design. You can't go to a foundry and ask them to make a few dozen units of your design, so even assuming they pull this off, no one will be able to manufacture this hardware at all - certainly not for the mobile segment.
Even assuming they get anywhere near the 2.5 W power consumption, in mobile that's higher than the whole goddamn system's power budget. Hell, Intel's integrated graphics on a laptop have a similar power budget.
This thing can ONLY work out as a hack of an existing RISC-V SoC mounted on a dedicated card you plug into some ancient FSF-certified board running Libreboot or something.