CLike: A New, "Simple C-Like" Programming Language


  • #21
    i worked with multisim before
    and yeah, that example is horrible, but it is not what i had in mind

    for example, those true/false boxes i presume need all the connections to be true/false
    i was thinking about more like standard logic nodes

    in the case of only nodes there would be a bunch of lines going to one node
    and by clicking or hovering the mouse above that node, the prerequisites would line up on the left side (or in another window or something like that)
    so there would be two versions of visualization, one as a grand overview and one per node
    also nodes could be packed to form new nodes with custom i/o

    so something between blender's node editor, altium and graphviz
    (other examples: unreal's kismet and c4's graphical scripting language)
    with transparency, maybe even shadows and depth, to make logic paths clearer
    so it would have to be opengl, since any older cpu would have problems rendering it while also changing the layout

    for example, i have a program in ~16 files
    many functions/macros doing relatively simple things
    that leads to the main file doing much with a small number of lines
    with a maximum logic depth of around 4-5, i think it would map well to nodes, except for the lowest level ones that do lower level things

    so some things would give huge graphs but why not mix 1D and 2D


    but yeah, i agree
    bad (logic) programming in a visual language would look way more horrid than in 1D
    if that is a bad thing, idk


    PS i still say AI = compiler, so there is AI already
    just not the sci-fi kind



    • #22
      Originally posted by gens View Post
      for example, those true/false boxes i presume need all the connections to be true/false
      i was thinking about more like standard logic nodes

      in the case of only nodes there would be a bunch of lines going to one node
      and by clicking or hovering the mouse above that node, the prerequisites would line up on the left side (or in another window or something like that)
      so there would be two versions of visualization, one as a grand overview and one per node
      also nodes could be packed to form new nodes with custom i/o
      Yes, that is how LabVIEW works already (the "new nodes with custom i/o" are just called "functions"). But this doesn't really solve any of the problems I mentioned.

      Further, it requires that everyone always follow best coding practices, or that you never share any code with anyone else. Neither of these is the case. No matter how cleanly you write your code, if you share code you are going to get code like the examples I showed you. And looking at that code, it wouldn't be very easy to write it in terms of functions (not without major refactoring, at least).

      Finally, it doesn't really scale. For more complex programs, there are going to be fairly large, complex files. No matter how careful you are to compartmentalize things, complex tasks require complex code, and it is not always possible to break a complex task into a lot of simpler tasks.

      Think about it this way: there is a reason that using "GOTO" statements is considered bad programming practice. Visual programming languages, however, are basically a big collection of "GOTO" statements.

      Originally posted by gens View Post
      for example, i have a program in ~16 files
      many functions/macros doing relatively simple things
      that leads to the main file doing much with a small amount of lines
      with logic depth maximum of around 4-5 i think it would map well to nodes, except for the lowest level ones that do lower level things
      That is an extremely simple program. Yes, you might be able to get away with graphical programming with something like that, but even then it is almost certainly going to be more complicated and harder to follow than text-based programming.

      Originally posted by gens View Post
      PS still AI = compiler, so there is AI already
      I doubt anyone in the AI community would agree with that definition. Compilers can only do what they are told to do.



      • #23
        Originally posted by TheBlackCat View Post
        Finally, it doesn't really scale. For more complex programs, there are going to be fairly large, complex files. No matter how careful you are to compartmentalize things, complex tasks require complex code, and it is not always possible to break a complex task into a lot of simpler tasks.

        Think about it this way: there is a reason that using "GOTO" statements is considered bad programming practice. Visual programming languages, however, are basically a big collection of "GOTO" statements.


        That is an extremely simple program. Yes, you might be able to get away with graphical programming with something like that, but even then it is almost certainly going to be more complicated and harder to follow than text-based programming.


        I doubt anyone in the AI community would agree with that definition. Compilers can only do what they are told to do.
        goto is considered bad because some people misuse it (people who, in my opinion, do not understand what they are doing)
        it is actually useful in some cases and handy in a couple of other cases (and is hardwired into the cpu as the "jmp" opcode)
        very useful for programs that have a "flow", like network protocols, resulting in _very_ fast execution times and small binaries
        moreover when someone says literally "goto is bad" i think to myself "he's an idiot"
        (can't really blame people on that one since they are taught like that in schools)
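
        to show what i mean, here is a minimal sketch of a goto-based state machine in C (the states and the input format are made up for the example, not taken from any real protocol):

        #include <stdio.h>

        /* toy parser: expects 'H', then one or more 'B', then 'E'
           each label is a state; each goto compiles to a plain jmp */
        static int parse(const char *p)
        {
            switch (*p++) {
            case 'H': goto header;
            default:  goto error;
            }
        header:
            switch (*p++) {
            case 'B': goto body;
            default:  goto error;
            }
        body:
            switch (*p++) {
            case 'B': goto body;   /* stay in the body state */
            case 'E': return 1;    /* end marker: accept */
            default:  goto error;
            }
        error:
            return -1;
        }

        int main(void)
        {
            printf("%d %d\n", parse("HBBE"), parse("HX"));   /* prints: 1 -1 */
            return 0;
        }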

        as for my program, i wouldn't call a 3D game engine extremely simple
        but i did spend way more time thinking about the design than coding, and most of the coding time on refactoring
        i do still need to add scripting via a custom compiler (instead of just writing code) and advanced physics, but that won't add to the complexity of the high level part

        to make an analogy with electronics:
        some people use fkin arduinos to make a simple vu meter (with LEDs)
        the same thing can be done with two transistors, a few zeners and some resistors
        in programming that would be like using huge libraries (BOOST, for example) to make a simple program


        a compiler does the thinking about how to allocate registers, "vectorize" math and such
        it has knowledge about languages
        and if you can call profile guided optimization learning, well
        the funny thing is that even the simplest machines count as "automata", yet that term is now used only for complex ones
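
        (for example, a compiler will vectorize and register-allocate a loop like this one on its own; the function is just an illustration, and the flags in the comment are standard gcc options)

        /* compile with: gcc -O3 -march=native -S saxpy.c
           at -O3, gcc will typically auto-vectorize this loop (SSE/AVX)
           and decide the register allocation with no hints in the source */
        void saxpy(float *restrict y, const float *restrict x,
                   float a, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }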



        • #24
          Originally posted by gens View Post
          goto is considered bad because some people misuse it (people who, in my opinion, do not understand what they are doing)
          it is actually useful in some cases and handy in a couple of other cases (and is hardwired into the cpu as the "jmp" opcode)
          very useful for programs that have a "flow", like network protocols, resulting in _very_ fast execution times and small binaries
          moreover when someone says literally "goto is bad" i think to myself "he's an idiot"
          (can't really blame people on that one since they are taught like that in schools)
          In this context, goto is bad because it makes it harder to keep track of the flow of the program. Yes, it can provide some optimization, but we are talking purely in terms of the structure of the language here, and goto statements make it harder to keep track of what is going on when used extensively (spaghetti code). Visual programming languages can never produce anything other than spaghetti code; it is inherent in the nature of the paradigm.

          Originally posted by gens View Post
          as for my program, i wouldn't call a 3D game engine extremely simple
          I would call any program with 16 files small, unless the files are absolutely massive, with tens of thousands of lines of code each, in which case the same problems apply.

          But this is irrelevant to my point, which is that there are inherent problems in visual programming languages that have prevented them from becoming widespread despite existing for nearly half a century.


          Originally posted by gens View Post
          a compiler does the thinking about how to allocate registers, "vectorize" math and such
          it has knowledge about languages
          It can only do these things if it is specifically programmed to know when and how to do them. A compiler doesn't come up with new optimization strategies on its own; it simply applies the ones it was programmed to apply.

          Originally posted by gens View Post
          and if you can call profile guided optimization learning, well
          I wouldn't; in fact, I would call it the exact opposite of learning. Learning, both in its human context and in an AI context, refers to developing new generalizable rules and principles. For example, in machine learning you provide the algorithm with a limited data set, and it then develops general rules that can be applied to a larger data set it hasn't seen before.
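
          To make that concrete, here is a toy sketch with made-up data: fit a rule to a handful of training points, then apply it to an input the algorithm never saw.

          #include <stdio.h>

          /* Toy "learning": fit y = w*x + b to four training points by
             gradient descent, then predict for an x outside the data. */
          int main(void)
          {
              const double xs[] = {1, 2, 3, 4};
              const double ys[] = {3, 5, 7, 9};   /* hidden rule: y = 2x + 1 */
              double w = 0.0, b = 0.0, lr = 0.02;

              for (int epoch = 0; epoch < 5000; epoch++)
                  for (int i = 0; i < 4; i++) {
                      double err = (w * xs[i] + b) - ys[i];
                      w -= lr * err * xs[i];   /* one gradient step */
                      b -= lr * err;
                  }

              /* generalization: x = 10 was never in the training set */
              printf("w=%.2f b=%.2f predict(10)=%.2f\n", w, b, w * 10 + b);
              return 0;
          }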

          With profile-guided optimization, the knowledge gained is only applied to the code profiled. If the profiling provided the compiler with new compiling rules that improved its performance on other code it hadn't seen before, then I would call that learning. But just detecting the sorts of things a program will do and applying pre-determined rules to that isn't learning in the AI sense of the word.
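
          For reference, the standard GCC PGO workflow makes the point concrete: the profile is gathered from, and fed back into, the same source, and the compiler itself is unchanged afterwards (the flags below are standard GCC options; the file name and loop are just an example).

          /* hot.c -- any program with a profile-sensitive branch will do
           *
           *   gcc -O2 -fprofile-generate hot.c -o hot   (instrumented build)
           *   ./hot                                     (run: writes a .gcda profile)
           *   gcc -O2 -fprofile-use hot.c -o hot        (rebuild using that profile)
           *
           * The .gcda data describes only this program's behavior;
           * compiling any other file starts from scratch.
           */
          int main(void)
          {
              volatile long sum = 0;
              for (long i = 0; i < 100000000L; i++)   /* mostly-taken branch */
                  sum += (i % 4) ? 3 : 7;
              return (int)(sum & 0xff);
          }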



          • #25
            Originally posted by TheBlackCat View Post
            In this context, goto is bad because it makes it harder to keep track of the flow of the program.

            I would call any program with 16 files small, unless the files are absolutely massive, with tens of thousands of lines of code each, in which case the same problems apply.

            But this is irrelevant to my point, which is that there are inherent problems in visual programming languages that have prevented them from becoming widespread despite existing for nearly half a century.

            I wouldn't; in fact, I would call it the exact opposite of learning. Learning, both in its human context and in an AI context, refers to developing new generalizable rules and principles. For example, in machine learning you provide the algorithm with a limited data set, and it then develops general rules that can be applied to a larger data set it hasn't seen before.
            i'd argue that it makes it simpler, as instead of having multiple functions that do progressively less you have one function that does it all
            unless you've never seen a label and don't know how pointers work
            in fact you can write any kind of program using goto instead of subroutines, and most of them would be perfectly readable
            again, in the case of parsing (for example network protocols) you DO get a clearer picture, as you are writing a state machine
            example http://galos.no-ip.org/sdhcp

            you didn't call it small, you called it simple
            bigger does not mean better, as proper coding will almost always result in less code
            hence i support the notion that "a more valuable programmer is one that removed more code" (can't find where i read it originally)

            yes visual programming has existed for a while now
            but computers were not able to efficiently render transparency, let alone animations and reordering


            as for learning
            we learn by copying others, amongst other things
            from wiki: "Learning is the act of acquiring new, or modifying and reinforcing..."
            PGO is refining, no?



            • #26
              Originally posted by gens View Post
              i'd argue that it makes it simpler, as instead of having multiple functions that do progressively less you have one function that does it all
              unless you've never seen a label and don't know how pointers work
              in fact you can write any kind of program using goto instead of subroutines, and most of them would be perfectly readable
              again, in the case of parsing (for example network protocols) you DO get a clearer picture, as you are writing a state machine
              example http://galos.no-ip.org/sdhcp

              you didn't call it small, you called it simple
              bigger does not mean better, as proper coding will almost always result in less code
              hence i support the notion that "a more valuable programmer is one that removed more code" (can't find where i read it originally)

              yes visual programming has existed for a while now
              but computers were not able to efficiently render transparency, let alone animations and reordering
              None of this is relevant to my point. I listed some specific problems that I think have prevented visual programming from becoming mainstream and will continue to do so. Transparency and animation will not correct those issues (and, actually, LabVIEW does have optional animations). If you have some approach that could correct for those issues, I would love to hear it. But as long as those issues exist, I don't think visual programming will extend much beyond the niche it already has.

              Originally posted by gens View Post
              as for learning
              we learn by copying others, amongst other things
              from wiki: "Learning is the act of acquiring new, or modifying and reinforcing..."
              PGO is refining, no?
              Also from wiki: "Learning produces changes in the organism and the changes produced are relatively permanent." That is the problem with PGO: it doesn't produce long-term changes to the compiler; the "learning" is immediately forgotten when given a new piece of code. I am sure you can find some general definition of learning from somewhere that can fit PGO, but from an AI standpoint it isn't learning.



              • #27
                Originally posted by TheBlackCat View Post
                None of this is relevant to my point. I listed some specific problems that I think have prevented visual programming from becoming mainstream and will continue to do so. Transparency and animation will not correct those issues (and, actually, LabVIEW does have optional animations). If you have some approach that could correct for those issues, I would love to hear it. But as long as those issues exist, I don't think visual programming will extend much beyond the niche it already has.
                as you claim, writing a program visually is not the problem; clearly visualizing a written program is
                so tools are needed that help visualize it

                like, for example, you hold alt and click on a node
                everything except that node goes to 10% transparency
                you get a menu on the top-right side of the screen that has checks for "level of view", "takes from", "uses xx" and idk
                where "level of view" would be how deep, from that node, to go in choosing which connected nodes stay opaque
                also connections would be thickened and colored differently (depending also on their type)

                if you press ctrl and click and hold on a node then all nodes that feed to this one would be lined up on the left
                (with an animation so you know where they are in the mess when you release)

                more advanced ways of visualizing would need the GUI to be aware of the code
                like in programming text editors today, where they color text one way if it is in braces,
                the node based gui would color a node or a line based on what it represents and how it interacts with connected nodes
                also visualizing entry and exit points depending on types; that much i see existing languages do already

                and such
                i've only seen some of this kind of representation in sci-fi (maybe in some obscure software, can't remember)
                anyway, a fun thing to think about



                • #28
                  also collapsing/expanding parts of it
                  where the grid in the background would bend to represent the amount of code collapsed



                  • #29
                    Originally posted by gens View Post
                    like, for example, you hold alt and click on a node
                    everything except that node goes to 10% transparency
                    you get a menu on the top-right side of the screen that has checks for "level of view", "takes from", "uses xx" and idk
                    where "level of view" would be how deep, from that node, to go in choosing which connected nodes stay opaque
                    also connections would be thickened and colored differently (depending also on their type)

                    if you press ctrl and click and hold on a node then all nodes that feed to this one would be lined up on the left
                    (with an animation so you know where they are in the mess when you release)

                    more advanced ways of visualizing would need the GUI to be aware of the code
                    like in programming text editors today, where they color text one way if it is in braces,
                    the node based gui would color a node or a line based on what it represents and how it interacts with connected nodes
                    also visualizing entry and exit points depending on types; that much i see existing languages do already

                    and such
                    i've only seen some of this kind of representation in sci-fi (maybe in some obscure software, can't remember)
                    anyway, a fun thing to think about
                    You should look at LabVIEW; it already has all of these features and has for at least a decade. Graphically they aren't actually all that difficult to do. But it still doesn't really solve the fundamental problems:

                    1. It won't help when the nodes are on another page. Pulling the node to the current page won't help, because its connections will still be on another page.
                    2. It won't help when there are a lot of connections. Humans just get lost when facing a large bundle of similar lines. Looking at individual inputs or outputs won't really help because that prevents you from getting an overview.
                    3. Untangling lines in a meaningful way is an inherently difficult problem for computers, since the computer can't have knowledge of the overall purpose the human has for the program, and that purpose is what determines what should be grouped together. LabVIEW can untangle simple layouts automatically, but it chokes on complex ones (or rather, it can't read your mind to determine what layout would be useful to you), and some layouts fundamentally cannot be untangled.
                    4. It will never be as easy for humans to keep track of many lines as it will be for them to keep track of many names.

                    But even if you could fix all these problems, the basic issue still stands: all of these things are easier and more natural to do in a text-based language. All the features you list are already available in any half-decent text-based IDE, which in general produces output that is easier for the human mind to process.

                    To give another example of why this is hard: have you ever played those puzzles in a book where you have a tangle of lines and you have to find out which one leads to some target? Those are puzzles in the first place because that sort of thing is hard for humans to do. On the other hand, have you ever read a choose-your-own-adventure book? It is basically the same thing, only more complicated because lines can branch and join. But choose-your-own-adventure books are not considered puzzles, because this sort of thing is easy for humans to do when it is in this sort of text format. For humans, following a complicated text-based network is easier than following a complex graphical network.

                    Another example: streets have names because humans are better at keeping track of names in their heads than complex branching networks.

                    For very simple networks, block diagrams can be useful. But when structures start getting complicated, humans are much better at dealing with text, at least when dealing with the sort of step-by-step process that computers are suited for.



                    • #30
                      Originally posted by TheBlackCat View Post
                      I know what a visual programming language is; I am very proficient in three of them. But I only use them when I have no alternative (generally because they are the only approach a vendor provides to interface with their proprietary hardware).

                      There are several problems I have with visual programming languages (all of these, of course, are my opinion):

                      ....
                      Interesting. Which languages are those besides LabVIEW?
                      What things (if any) do you think current visual programming languages do better than text-based languages?

