Unified Parallel C (UPC) Proposed For GCC 4.8

  • Unified Parallel C (UPC) Proposed For GCC 4.8

    Phoronix: Unified Parallel C (UPC) Proposed For GCC 4.8

    A proposal has gone out to merge support for GUPC, the GNU Unified Parallel C branch, into the forthcoming GCC 4.8 compiler code-base...

    http://www.phoronix.com/vr.php?view=MTE2Njg

  • #2
    So how is this stuff better than OpenCL and other APIs out there?



    • #3
      Originally posted by mark45:
      So how is this stuff better than OpenCL and other APIs out there?
      OpenCL is not network aware. UPC, Co-Array Fortran, and Titanium all can use message passing to transfer data amongst elements of a compute cluster.

      For background information, start here.
      http://en.wikipedia.org/wiki/Partiti..._address_space
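
      For a flavour of the PGAS model, here is a minimal UPC sketch (this needs a UPC-capable toolchain such as GUPC and a thread count fixed at compile or launch time; `shared`, `THREADS`, `MYTHREAD`, `upc_barrier`, and `upc.h` are UPC constructs, not plain C):

      ```c
      #include <upc.h>
      #include <stdio.h>

      /* One element lives in each thread's partition of the shared
       * address space, but every thread can address all of them (PGAS).
       * On a cluster, remote accesses become network transfers
       * under the hood. */
      shared int counts[THREADS];

      int main(void) {
          counts[MYTHREAD] = MYTHREAD + 1;  /* each thread writes its own slot */
          upc_barrier;                      /* wait until everyone has written */

          if (MYTHREAD == 0) {              /* thread 0 sums across partitions */
              int total = 0;
              for (int i = 0; i < THREADS; i++)
                  total += counts[i];       /* these reads may be remote */
              printf("sum over %d threads = %d\n", THREADS, total);
          }
          return 0;
      }
      ```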



      • #4
        Originally posted by mark45:
        So how is this stuff better than OpenCL and other APIs out there?
        OpenCL is for stream processing and FP calculations. UPC is more like MPI; it's for usual C code.



        • #5
          In the future we can expect 50 or more cores per CPU, so this will let programs use them more easily, I guess. GPU computing (OpenCL) is far more limited, especially when it comes to manipulating large data sets that cannot be divided into little pieces (a few KB each).

          Still, what I am waiting for in the parallel field is the "linked variable" approach, which I invented but which, unfortunately for humanity, nobody else could ever learn about :-) until now, that is. In short, it would be possible to link a variable from one class with a variable of another (or the same) class, such that both are physically the same variable at the same location, for example:

          class Father {
              Son s;
              int age linked_to s::father_age;
          };

          class Son {
              Father f;
              int father_age linked_to f::age;
          };

          Changing this variable in either class would also trigger the function "<varname>_changed_externally()" in the linked class.

          This simple approach would enable easy development of massively parallel neural networks (thousands, later maybe millions, of neurons working in parallel) on a chip with a special design: it would consist of thousands of little "cores", so that each class fits into one core (in this case, class Father would be inside one core and class Son in another). Each core could run normal C/C++ code, albeit with very little memory (1 KB?). An application would first set up the whole net of classes and links, thousands of them or more, and then the computation would run inside the little cores. Cores would also be capable of rearranging connections (via linked variables). The hardware design would be really difficult, but it would improve over time and eventually enable things that are impossible now (like accurately simulating a whole human brain).
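
          The `linked_to` keyword, per-class cores, and automatic threading don't exist in C/C++, but the linking itself can be sketched in plain single-threaded C++ as a shared cell plus a change-notification hook (all names below, like `LinkedInt` and `run_demo`, are made up for illustration):

          ```cpp
          #include <cassert>
          #include <functional>
          #include <iostream>
          #include <memory>

          // A shared cell standing in for a hypothetical "linked" variable:
          // both ends hold the same storage, and writing through the setter
          // fires the other end's "<varname>_changed_externally()" hook.
          struct LinkedInt {
              int value = 0;
              std::function<void(int)> notify_peer;  // callback into the linked class
              void set(int v) {
                  value = v;
                  if (notify_peer) notify_peer(v);
              }
          };

          struct Son {
              std::shared_ptr<LinkedInt> father_age;      // linked to Father's age
              int last_seen = -1;
              void father_age_changed_externally(int v) { // the hypothetical hook
                  last_seen = v;
              }
          };

          int run_demo() {
              auto cell = std::make_shared<LinkedInt>();  // the single shared location
              Son s;
              s.father_age = cell;
              cell->notify_peer = [&s](int v) { s.father_age_changed_externally(v); };

              cell->set(42);       // the "Father" side writes its age...
              return s.last_seen;  // ...and Son's hook has already seen it
          }

          int main() {
              assert(run_demo() == 42);
              std::cout << "linked value seen by Son: " << run_demo() << "\n";
              return 0;
          }
          ```

          In the proposed hardware, the callback would instead run in the linked core's own thread; here it runs inline, which is the part a real implementation would have to change.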



          • #6
            Originally posted by mirza:
            Still, what I am waiting for in the parallel field is the "linked variable" approach, which I invented but which, unfortunately for humanity, nobody else could ever learn about :-) until now, that is. In short, it would be possible to link a variable from one class with a variable of another (or the same) class, such that both are physically the same variable at the same location, for example:

            class Father {
                Son s;
                int age linked_to s::father_age;
            };

            class Son {
                Father f;
                int father_age linked_to f::age;
            };

            Changing this variable in either class would also trigger the function "<varname>_changed_externally()" in the linked class.
            I guess I didn't get it, because I'm about to say "ever heard of static members?" :-P



            • #7
              Originally posted by RealNC:
              I guess I didn't get it, because I'm about to say "ever heard of static members?" :-P
              Statics have the same value for all objects of the same type. This variable would share its value only with the _instances_ you specifically set. If you think of a neuron, it shares data with one (or a few) other neuron(s). When you change the value of an output variable, it automatically changes the value of their input variable, because it is the same synapse. Also, an important point is that changing a synapse value triggers a function that runs in its own thread (and core), so there is no need for "synchronization", data sharing, etc. Basically, it is a fix for the current mismatch between how a neural network works and how a CPU works, while keeping the programmable C/C++ paradigm - avoiding a fully hardware solution, which would be much more difficult and expensive to "get right".
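
              The contrast with a static member can be shown in plain C++ (a rough stand-in: the hypothetical link is modeled here as a pointer to a shared cell wired up per instance pair, whereas a static member is one value for the whole class; `Neuron` and `linked_demo` are illustrative names):

              ```cpp
              #include <cassert>

              struct Neuron {
                  static int class_wide;   // a static: ONE value for ALL Neuron objects
                  int* synapse = nullptr;  // shared only with the instances you link
              };
              int Neuron::class_wide = 0;

              int linked_demo() {
                  Neuron a, b, c;
                  int cell = 0;                  // storage shared by the linked pair only
                  a.synapse = &cell;
                  b.synapse = &cell;             // a and b are linked; c is not

                  *a.synapse = 7;                // a writes through the link...
                  assert(c.synapse == nullptr);  // ...c is untouched
                  Neuron::class_wide = 5;        // by contrast, this changes for a, b,
                                                 // and c alike
                  return *b.synapse;             // b sees the same value as a
              }

              int main() {
                  assert(linked_demo() == 7);
                  return 0;
              }
              ```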



              • #8
                Originally posted by mirza:
                Statics have the same value for all objects of the same type. This variable would share its value only with the _instances_ you specifically set. If you think of a neuron, it shares data with one (or a few) other neuron(s). When you change the value of an output variable, it automatically changes the value of their input variable, because it is the same synapse. Also, an important point is that changing a synapse value triggers a function that runs in its own thread (and core), so there is no need for "synchronization", data sharing, etc. Basically, it is a fix for the current mismatch between how a neural network works and how a CPU works, while keeping the programmable C/C++ paradigm - avoiding a fully hardware solution, which would be much more difficult and expensive to "get right".
                Hm, then those would be references. I'm sure I still don't get it :-P The way I understood this, you want:

                Code:
                class Son; // forward declaration

                class Father {
                public:
                    Son* son;
                    int age;
                
                    Father(Son* s, int a) : son(s), age(a) { }
                };
                
                class Son {
                public:
                    Father* father;
                    int& father_age; // bound directly to father's age
                
                    Son(Father* f) : father(f), father_age(f->age) { }
                };



                • #9
                  Originally posted by RealNC:
                  Hm, then those would be references. I'm sure I still don't get it :-P The way I understood this, you want:

                  Code:
                  class Son; // forward declaration

                  class Father {
                  public:
                      Son* son;
                      int age;
                  
                      Father(Son* s, int a) : son(s), age(a) { }
                  };
                  
                  class Son {
                  public:
                      Father* father;
                      int& father_age; // bound directly to father's age
                  
                      Son(Father* f) : father(f), father_age(f->age) { }
                  };
                  Yes, that's exactly it, except:
                  - On "my" architecture, every class runs on its own core, not on one (or several) classical CPU cores; therefore it is massively parallel.
                  - A variable "link" is different from a reference in that changing it triggers the linked object's function onChanged() (like an RPC call). Normally, from one class, you cannot run functions of another class (because it is on a different core, running in its own thread).
                  - Variable "links" should be implemented in hardware. That means each core has memory shared with its 8 neighbouring cores. All cores are laid out as a giant 2D grid, with the possibility of creating shortcuts to faraway cores (perhaps on another "floor" of the chip, which can have configurable shortcut connections - a block of memory shared by _any_ two cores). This is like a high-level synapse that connects separate functional parts of neural networks. A single neural network can be established by allocating a sub-grid of cores, depending on the complexity of the functional unit.
