Alibaba T-Head TH1520 RISC-V CPU & A Few New Arm SoCs Ready For Linux 6.5


  • tuxd3v
    replied
    Originally posted by jacob View Post
    Given how long RISC-V has been around, I don't think it's fair to call expectations regarding competitive high-performance cores "sudden". The fact is that, to the best of my knowledge, there isn't even any development effort or credible project underway. You have a good point regarding China, although I expect that their default choice will be Loongson.
    We need to realize that 16 extensions were only ratified in 2022.
    It's too soon to expect high performance and a full feature set out of RISC-V cores.

    Loongson is the best option now, because it has been in development for 20 years. But for microcontrollers, etc., there are already plenty of options.



  • jacob
    replied
    Originally posted by coder View Post
    I'm not sure where you got the idea that ultra high-performance RISC-V CPUs would suddenly materialize to contend for your workstation, but I think that was an unrealistic expectation. Just look at how long it's taken ARM to get there.

    There's light on the horizon, though. The natural evolution would first have it become a serious contender in embedded Linux and phones. From there, servers and Chromebooks are the next logical step. Then, we might expect to see competitive laptops. Finally, workstations and desktops.

    The main thing that could hasten the transition is China. Having seen how the US used IP rights to shut down ARM licensees has, I think, steered them away from that path. They're going all-in on RISC-V.

    Second to that is how ARM is trying to squeeze Qualcomm over Nuvia's architectural license. That could kill the market for SoCs or CPUs with custom-designed ARM cores, such as anything Intel or AMD might design (because ARM is trying to assert that the downstream customer needs to buy a license to use those chips). This could push Qualcomm, Intel, AMD, etc. over to RISC-V, instead of going with ARM.

    I'm not going to guess what the timeline could be.
    Given how long RISC-V has been around, I don't think it's fair to call expectations regarding competitive high-performance cores "sudden". The fact is that, to the best of my knowledge, there isn't even any development effort or credible project underway. You have a good point regarding China, although I expect that their default choice will be Loongson.



  • coder
    replied
    Originally posted by jacob View Post
    But that's something that we've been hearing for the past 10 years or so. There is NO ecosystem. There is NO RISC-V-based hardware anywhere to be seen, and NONE has even been announced that is not vapourware. I'm not asking about ultra-low-end microcontrollers or mobile-oriented SoCs that are barely comparable to ARM equivalents from three or four generations ago. I'm talking about workstation-grade systems that are comparable to Core i7s or i9s and Ryzens in ABSOLUTE performance (not per-cycle or per-watt). They simply don't and won't exist.
    I'm not sure where you got the idea that ultra high-performance RISC-V CPUs would suddenly materialize to contend for your workstation, but I think that was an unrealistic expectation. Just look at how long it's taken ARM to get there.

    There's light on the horizon, though. The natural evolution would first have it become a serious contender in embedded Linux and phones. From there, servers and Chromebooks are the next logical step. Then, we might expect to see competitive laptops. Finally, workstations and desktops.

    The main thing that could hasten the transition is China. Having seen how the US used IP rights to shut down ARM licensees has, I think, steered them away from that path. They're going all-in on RISC-V.

    Second to that is how ARM is trying to squeeze Qualcomm over Nuvia's architectural license. That could kill the market for SoCs or CPUs with custom-designed ARM cores, such as anything Intel or AMD might design (because ARM is trying to assert that the downstream customer needs to buy a license to use those chips). This could push Qualcomm, Intel, AMD, etc. over to RISC-V, instead of going with ARM.

    I'm not going to guess what the timeline could be.



  • jacob
    replied
    Originally posted by ayumu View Post

    That chip has boards that are cheaper than a Raspberry Pi 4 or 400, and faster than them.

    Regardless, it isn't meant for you, but for developers who want to help get the RISC-V software ecosystem ready for the masses.
    But that's something that we've been hearing for the past 10 years or so. There is NO ecosystem. There is NO RISC-V-based hardware anywhere to be seen, and NONE has even been announced that is not vapourware. I'm not asking about ultra-low-end microcontrollers or mobile-oriented SoCs that are barely comparable to ARM equivalents from three or four generations ago. I'm talking about workstation-grade systems that are comparable to Core i7s or i9s and Ryzens in ABSOLUTE performance (not per-cycle or per-watt). They simply don't and won't exist.



  • tuxd3v
    replied
    Originally posted by coder View Post
    Well, if this CPU is only about advancing the ecosystem, then they should've prioritized features over performance.

    However, for all we know, they actually tried to implement FPE, but there was some chip bug and they decided just to disallow them instead of respinning the chip. I don't think they're simple to implement on a pipelined CPU.
    I think that RISC-V has a bright future, but it's the new kid on the block: everything needs to be done from the ground up, and to be frank, I think RISC-V is doing an amazing job, so I don't complain about that.
    We just need to understand that they ratified a lot of extensions in 2022... now we need to wait for hardware that complies with them. And all the companies involved are developing and doing amazing work.

    And it's indeed very good to see new hardware with this ISA.
    Yeah, FPUs are very nasty and exceptions are difficult... sooner or later we will get them.
    Originally posted by coder View Post
    I'm not really sure what you're complaining about. Is it that compile-time warning? These functions do also provide return codes that indicate whether they worked, but I guess the libc maintainers thought that wasn't good enough.
    Well,
    the warning from the compiler is mind-boggling; I didn't try to understand why, because it should just work. Other features like GNU's '-fsignaling-nans' should be something that... just works, but it's still considered "experimental".
    Thirty years in development, always focused on x86/amd64, and there are still areas where some things fail.

    Can we blame RISC-V for not having exceptions, when 30-year-old things still don't have complete support? I think we can't.
    Each project has its own goals; with people coming and going, those goals are very difficult to manage, and... we need to manage our own expectations.

    Anyway, I am excited that new hardware is coming to help us progress towards a global ISA, one that has the possibility to unite us all and simplify development, since we will no longer need to worry about zillions of things from other archs, incompatibilities, etc... it's a dream that can come true.
    Other archs will always exist, because of specific features they have, and also because there is a lot of software developed with them in mind.
    Maybe the biggest challenge for RISC-V will be the optimization process...



  • coder
    replied
    Originally posted by tuxd3v View Post
    You are misunderstanding my position.
    I think that any hardware that wants to be taken seriously does need to have hardware exceptions.
    However, the current hardware is made to enable people to advance the RISC-V ecosystem... it's different; they have completely different goals.
    Well, if this CPU is only about advancing the ecosystem, then they should've prioritized features over performance.

    However, for all we know, they actually tried to implement FPE, but there was some chip bug and they decided just to disallow them instead of respinning the chip. I don't think they're simple to implement on a pipelined CPU.

    Originally posted by tuxd3v View Post
    The tradition is to sweep the dirt under the carpet, shifting the problem to the client programmer.
    I think that archs have the responsibility to provide ways of detecting what is supported and what is not.
    I'm not really sure what you're complaining about. Is it that compile-time warning? These functions do also provide return codes that indicate whether they worked, but I guess the libc maintainers thought that wasn't good enough.
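
    For what it's worth, here's a minimal sketch of what checking those return codes might look like with glibc's feenableexcept(), which returns the previous exception mask on success and -1 on failure (the probe program itself is just my own illustration, not anything from libc's docs or this thread):
    Code:
    /* build: gcc probe.c -lm */
    #define _GNU_SOURCE   /* feenableexcept() is a glibc extension */
    #include <fenv.h>
    #include <stdio.h>
    
    int main(void)
    {
        /* Ask for trapping on FP divide-by-zero and check whether it took. */
        if (feenableexcept(FE_DIVBYZERO) == -1)
            fprintf(stderr, "FP divide-by-zero trapping not available\n");
        else
            printf("FP divide-by-zero will now raise SIGFPE\n");
        return 0;
    }
    If the hardware (or the libc port) can't actually deliver the trap, that -1 is about the only detection this API gives you.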



  • tuxd3v
    replied
    Originally posted by coder View Post
    WTF? You call that "broken by design", but then it's somehow okay that this CPU doesn't implement FP exceptions?
    You are misunderstanding my position.
    I think that any hardware that wants to be taken seriously does need to have hardware exceptions.
    However, the current hardware is made to enable people to advance the RISC-V ecosystem... it's different; they have completely different goals.

    Originally posted by coder View Post
    If libc is intended to support even CPUs which don't implement FP exceptions, then defaulting FP exceptions to disabled is actually the correct choice! That way, you get the same behavior on both.
    The tradition is to sweep the dirt under the carpet, shifting the problem to the client programmer.
    I think that archs have the responsibility to provide ways of detecting what is supported and what is not.



  • coder
    replied
    Originally posted by tuxd3v View Post
    Well, FE exceptions seem to be broken... by design maybe, see this.
    WTF? You call that "broken by design", but then it's somehow okay that this CPU doesn't implement FP exceptions?

    If libc is intended to support even CPUs which don't implement FP exceptions, then defaulting FP exceptions to disabled is actually the correct choice! That way, you get the same behavior on both.
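
    To illustrate what "same behavior on both" means in practice, here is a small sketch of the plain C99 fenv.h pattern that works without traps at all: leave exceptions masked and poll the sticky status flag after the operation. This assumes the FPU at least implements the status flags, and it's my own example, not anything from the libc sources:
    Code:
    /* build: gcc flags.c -lm */
    #include <fenv.h>
    #include <stdio.h>
    
    int main(void)
    {
        volatile double num = 1.0, den = 0.0;   /* volatile so the division isn't folded away */
    
        feclearexcept(FE_ALL_EXCEPT);
        volatile double q = num / den;           /* yields +inf and sets the FE_DIVBYZERO flag */
    
        if (fetestexcept(FE_DIVBYZERO))
            printf("divide by zero happened, q = %f\n", q);
        return 0;
    }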



  • coder
    replied
    Originally posted by kreijack View Post
    So it seems that neither x86_64 nor RISC-V traps in case of division by zero if the operation is between floating-point numbers. It's different in the case of integer numbers.
    On Linux/x86-64 the default behavior is not to generate floating point exceptions. You can override this using the API in fenv.h:
    Code:
    #define _GNU_SOURCE   /* feenableexcept() is a glibc extension */
    #include <fenv.h>
    
    ...
        /* unmask the divide-by-zero exception so it traps */
        feenableexcept(FE_DIVBYZERO);
    The result is you'll get a SIGFPE (8). You can even register a signal handler for this:
    Code:
    #include <signal.h>
    #include <stdio.h>    /* printf, fflush */
    #include <stdlib.h>   /* exit */
    
    void sighandler(int n)
    {
        printf("Got signal: %d\n", n);
        fflush(stdout);
        exit(1);
    }
    
    ...
        signal(SIGFPE, &sighandler);
    If you set the signal handler without enabling FE_DIVBYZERO, the program will execute until idiv(1, 0), before you get a SIGFPE. If you first enable FE_DIVBYZERO, then it'll raise a SIGFPE immediately when you do your first fdiv(1, 0).

    In either case, the signal appears to be fatal. I tried feclearexcept(FE_DIVBYZERO) inside the signal handler (and removing the exit()), but it made no difference. The program just got stuck in a loop, continually calling the signal handler. If you figure out how to clear this state and continue executing, let me know. Not that I really want to do such a thing, but it'd be interesting to know.
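
    Pieced together, a self-contained version of that experiment might look roughly like this (fdiv() here is just my stand-in for the helper from the earlier test code, so treat it as an illustration rather than the original program):
    Code:
    /* build: gcc fpe.c -lm ; running it should print "Got signal: 8" */
    #define _GNU_SOURCE   /* feenableexcept() is a glibc extension */
    #include <fenv.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    
    static void sighandler(int n)
    {
        printf("Got signal: %d\n", n);
        fflush(stdout);
        exit(1);   /* without this, returning re-runs the faulting division and loops */
    }
    
    static double fdiv(double a, double b)   /* stand-in for the helper in the test code */
    {
        return a / b;
    }
    
    int main(void)
    {
        volatile double one = 1.0, zero = 0.0;   /* volatile so the division isn't constant-folded */
    
        signal(SIGFPE, &sighandler);
        feenableexcept(FE_DIVBYZERO);            /* make FP divide-by-zero trap immediately */
    
        printf("result: %f\n", fdiv(one, zero)); /* SIGFPE is raised inside fdiv() */
        return 0;
    }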



  • uxmkt
    replied
    Originally posted by coder View Post
    The main reason is as a development vehicle, and for that it's fast enough.
    That depends on the type of development.

