Paradigm Shift

  • Paradigm Shift

    Imagine with me for a second if, to use a dual-core CPU, you had to install proprietary drivers, keep track of an expansive API, configure a bunch of stuff to not break, and when all is said and done, all the second core could be used for is running Flash, Java, and AJAX. You're lucky it even works with your operating system.

    ...It would make for one blazing fast high definition web browsing experience, right?

    So there you are: you paid an extra $200 for this thing, you went through Hades just to get it working, and you doubled the power of your machine... but it can't en/de-code, it can't (de)compress, it can't en/de-crypt, it can't fold proteins, do physics...

    Well, it /could/ if you did it in Java or Flash, but... only as well as the driver performs, of course.

    Now imagine that people said 'Hey, this is stupid. Why am I wasting all that silicon?' and started making dual core processors that only needed an SMP-enabled kernel and could do anything you wanted at all. That would be revolutionary, wouldn't it?
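
    That last bit is the whole point, so here's a toy illustration (plain C with POSIX threads, nothing exotic): on ordinary general-purpose cores, any code at all can be spread across however many cores the SMP kernel exposes -- no vendor driver, no special API, no blessed list of workloads.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 2                 /* one worker per core on a dual-core box */

    /* The work here is arbitrary -- swap in encoding, compression, crypto, physics... */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        double acc = 0.0;
        for (long i = 1; i <= 10000000; i++)      /* stand-in for real work */
            acc += 1.0 / (double)(i + id);
        printf("worker %ld done (%f)\n", id, acc);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (long i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }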

  • #2
    To the chase:

    I don't like the way we think of graphics cards at all; I think it's a very short-sighted approach to computing. The thought of adding a PhysX card to the mix, or a hardware video encoder on a GPU, or, say, an X-Fi sound card should serve to make this more apparent. What are we doing?

    That's just the question, isn't it? We don't know -- and we can't assume that we can know. I can give you some /examples/ of what I do with my processing power, but five years from now, I couldn't tell you.

    I think the solution is asymmetric processing. A correctly designed system should have one core with a huge instruction set and huge memory-addressing capability, alongside hundreds of other cores with neither.

    I think you may be familiar with those people trying to make an Open GPU. We need a card that is nothing but a video and audio adapter, but one with every output known to man, at insane resolutions. A PCI RAMDAC.
    After that,

    Make a co-processor with 128 vector RISC cores on a PCIe 32x card.
    Create a gcc or LLVM backend for it (a sketch of the kind of code it would chew on follows below), and
    make Mesa.
    make GStreamer, Xine, and so on.
    make Folding@home.
    make YafRay.
    make Bullet.
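
    As a concrete (and purely illustrative) idea of what that backend would be fed, here is the kind of data-parallel kernel most of those projects boil down to -- plain C, nothing card-specific; the point is that a gcc/LLVM backend for the vector cores would map the loop onto the wide vector array instead of, say, x86 SSE:

    #include <stddef.h>

    /* y = a*x + y over n elements: the classic SAXPY kernel.            */
    /* The iterations are independent, so a vectorizing backend is free  */
    /* to spread them across however many lanes/cores the card exposes.  */
    void saxpy(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }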

    Now, I'm not sure, but I think that the kernel modules for scheduling tasks on this thing would be much smaller than the (sometimes binary-only) ones used to drive 'modern' GPUs?
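
    Just to show how thin the interface could be, here is what the userland side of such a kernel module might look like. Everything below -- the /dev/coproc0 node, the job struct, the ioctl number -- is invented for the sake of the sketch, not any existing driver API:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>   /* for the _IOW macro */
    #include <unistd.h>

    /* Hypothetical job descriptor: where the compiled vector kernel lives, */
    /* where the data lives, and how many elements to chew through.         */
    struct coproc_job {
        uint64_t code_addr;   /* user pointer to the compiled kernel */
        uint64_t data_addr;   /* user pointer to the data buffer     */
        uint64_t count;       /* number of elements                  */
    };

    #define COPROC_SUBMIT_JOB _IOW('c', 1, struct coproc_job)   /* made-up ioctl */

    int submit(struct coproc_job *job)
    {
        int fd = open("/dev/coproc0", O_RDWR);           /* hypothetical device node  */
        if (fd < 0)
            return -1;
        int ret = ioctl(fd, COPROC_SUBMIT_JOB, job);     /* kernel queues + schedules */
        close(fd);
        return ret;
    }

    The scheduling and memory management behind that one ioctl is the part the kernel module would actually have to get right, but the surface area is nothing like a full GPU driver stack.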

    Any thoughts?



    • #3
      An opportunity

      The Open GPU project:


      The Freedom CPU project:


      Now, while I like to think I kinda know /how/ things work, I've never made so much as a GUI app myself, so honestly, I'm kind of powerless to /do/ anything about my ideas but put them out there for the rest of you. If they're sound ideas, I'd be encouraged to hear as much, and if they're not, I'd appreciate knowing why, so I can learn, and improve them. I'll present a more comprehensive plan of action here:

      1. A freely licensed instruction set needs to be developed with the following traits:
      --a simple standalone instruction set at its core
      --a capable vector math instruction set on top
      --an expanded instruction set for larger processors
      -If the instruction set is well designed, it could be that a company like Freescale, VIA, or even AMD might pick up on it.
      2. A backend for gcc or LLVM needs to be made for the developed instruction set. Before moving on, only the base and vector instruction sets need to be fully stable.
      -The idea is that the base and vector instruction sets can be implemented now on x86, PPC, SPARC, etc. systems, and later a full 64-bit CPU can be created that can run the parallel code in the event that a co-processor is not available (but performance won't be great at all).
      3. Linux kernel modules need to be developed that can take code compiled for the non-native instruction set, send it to a PCI Express co-processor for execution, and manage scheduling of, and memory for, such tasks.
      4. Two cards need to be developed: a simple PCIe or PCI video and audio output/input card with open specs, and a PCIe 32x card with a power-of-two sized array of vector processors and 0.5-1 GB of onboard L3, along the lines of DDR3 or Rambus.
      5. Binaries need a way to indicate to the kernel that they must be executed on the co-processor array card (one possible mechanism is sketched after this list).
      6. Mesa (a software OpenGL implementation, I think) and various userland software will need to be modified to take advantage of the many RISC cores.
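
      For point 5, the simplest marker might be the binary itself: code compiled for the new instruction set carries its own machine number in the ELF header, which the kernel (or a userland launcher) can dispatch on. A minimal sketch in C -- EM_COPROC is a made-up value here, since no such machine number has actually been assigned:

      #include <elf.h>
      #include <stddef.h>
      #include <string.h>

      #define EM_COPROC 0x9999   /* hypothetical e_machine value for the vector-array ISA */

      /* Returns 1 if the buffer holds an ELF binary built for the co-processor ISA. */
      int wants_coproc(const unsigned char *buf, size_t len)
      {
          if (len < sizeof(Elf64_Ehdr) || memcmp(buf, ELFMAG, SELFMAG) != 0)
              return 0;
          const Elf64_Ehdr *hdr = (const Elf64_Ehdr *)buf;
          return hdr->e_machine == EM_COPROC;
      }

      Linux's existing binfmt_misc mechanism already does this kind of dispatch-by-magic for things like Java and Wine binaries, so the kernel side wouldn't have to start from zero.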

      What am I getting right? What am I completely missing? Who do I talk to?
      ...and how many of you want to see one of these suckers blend?



      • #4
        There was an interesting post by Flameeyes (a Gentoo developer) on his blog, asking "what if the OpenGL standard was proprietary?"

        link

        Not really the exact same topic, but it's pretty close to this discussion.



        • #5
          hmm

          I read the link... that sounds dangerous. For this to work, that would have to be fixed.

          ...but for graphics on the platform I'm talking about, OpenGL is a layer of abstraction -- but /not/ a requirement.

          Until now, OpenGL has been the only way for us to capture the power of graphics cards. I'm talking about the end of that, without just going into another round of proprietary drivers and CUDA and GPU firmware and whatnot. For it to go anywhere, though, we do need a full Free Software OpenGL implementation. It seems we don't truly have one, though I thought we did... which is a very serious matter.



          • #6
            I guess I should have

            posted this thread in a different spot. This isn't really /off/ topic, but I didn't know where else to put it. If some moderator could move it where they think it belongs, I'd appreciate it.

            So far I've got one response, and I know there are more of you out there with /some/ kind of opinion.

            OK, do we have any Blender users here?
            ...anyone? They'd be the first to see why I'm saying this. Okay then, I know a lot of the people here have done a simple

            ./configure
            make
            sudo make install

            You remember how long it took? Imagine that you could cut that down to 1/64th of the time with a $300 card. Now, I don't know that you can, because compiling is probably a more serial task, but you know, I don't make gcc commits; I don't know how that works -- it just may be that you can. Certainly a parallel make (-j) already compiles multiple source files at once when the makefile allows it... you know?

            People are saying that the future of computing is multiple CPU cores. I'm saying it's in making better use of the transistor counts we already have, and I do think it will be truly revolutionary. Does anyone else see what I see?



            • #7
              I'll make things simpler for you.

              Go to psubuntu.com

              Tell me what you see.

              Go to #ps3linux on freenode

              Tell me who you meet.

              Go to #ps3dev on freenode

              Tell me what you've done.

              ...or heck, just go nowhere and tell me what you think.
