Electron Apps Are Bad, So Now You Can Create Desktop Apps With HTML5 + Golang


  • cybertraveler
    replied
    Originally posted by ssokolow View Post

    I was saying that it may be possible to do something like augmenting the HTML5 <time> element with an attribute that says "If you support this attribute, ignore the contents of the text body of the time element and render the contents of the datetime attribute in the user's local timezone in accordance with their locale settings."

    Something like this, where the localize attribute would invoke the behaviour in supporting browsers:

    <time datetime="machine-readable date as spec'd" localize>fallback text</time>

    "Without JavaScript" doesn't automatically mean "server side" and there is precedent for HTML elements where the text body is intended as a fallback for unsupporting browsers.
    In some ways we already have some features like that in the browser. For instance you can do zoom on mouse hover with CSS:

    https://www.w3schools.com/howto/howt...zoom_hover.asp

    There is no JavaScript required, and thus no garbage collector + interpreter/JIT step. It's some (hopefully) well-designed Rust or C++ code, pre-compiled to machine code, handling the transformation.
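    The linked example boils down to a few lines of CSS; the `.zoom` class name here is made up for illustration:

```css
/* Made-up .zoom class: enlarge the element while the pointer hovers over it. */
.zoom {
  transition: transform 0.3s ease; /* animate the scale change */
}
.zoom:hover {
  transform: scale(1.5); /* 150% on hover, no script involved */
}
```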

    I bet there is a lot of stuff done in the general-purpose JavaScript language that could be replaced with some nice markup language and special elements (like your <time> example).

    Leave a comment:


  • ssokolow
    replied
    Originally posted by cj.wijtmans View Post

    What are you even on about? The server has no clue about the client's time display settings. I have mine set to an ISO standard instead of the Dutch time format. It doesn't have to be a UNIX timestamp; it could be a UTC time format, as long as there is an element that converts it to the client's native format.
    I was saying that it may be possible to do something like augmenting the HTML5 <time> element with an attribute that says "If you support this attribute, ignore the contents of the text body of the time element and render the contents of the datetime attribute in the user's local timezone in accordance with their locale settings."

    Something like this, where the localize attribute would invoke the behaviour in supporting browsers:

    <time datetime="machine-readable date as spec'd" localize>fallback text</time>

    "Without JavaScript" doesn't automatically mean "server side" and there is precedent for HTML elements where the text body is intended as a fallback for unsupporting browsers.
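    Today you'd approximate that behaviour with a small script. A sketch, where the `localize` attribute and the `localizeText` helper are purely hypothetical:

```javascript
// Sketch of a "polyfill" for the hypothetical localize attribute: render an
// ISO-8601 datetime string in the user's locale, or signal that the fallback
// text should be kept when the string doesn't parse.
function localizeText(isoString, locale) {
  const date = new Date(isoString);
  if (Number.isNaN(date.getTime())) return null; // keep the fallback text
  return new Intl.DateTimeFormat(locale, {
    dateStyle: "medium",
    timeStyle: "short",
  }).format(date);
}

// In a browser you would wire it up roughly like this:
// for (const el of document.querySelectorAll("time[localize]")) {
//   const text = localizeText(el.getAttribute("datetime"));
//   if (text !== null) el.textContent = text;
// }
```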
    Last edited by ssokolow; 02-10-2019, 01:53 PM.

    Leave a comment:


  • cj.wijtmans
    replied
    Originally posted by ssokolow View Post

    Accepting "seconds since the epoch" and displaying a timestamp, converted to the user's local timezone, without JavaScript are orthogonal. The former, you're probably not going to get, since it'd be inconsistent with existing stuff. The latter, I could see a possibility for.
    What are you even on about? The server has no clue about the client's time display settings. I have mine set to an ISO standard instead of the Dutch time format. It doesn't have to be a UNIX timestamp; it could be a UTC time format, as long as there is an element that converts it to the client's native format.

    Leave a comment:


  • cybertraveler
    replied
    Originally posted by caligula View Post
    ...
    Really cool post caligula. Enjoyed reading that. The language story is obviously much bigger than that, but you covered some key parts of it.

    Do you see the language situation improving in the future?

    Some areas which I think are improving:

    WebAssembly (WASM) - there's a growing number of languages that you can now compile to WebAssembly. If the WebAssembly API surface is expanded, we may be able to use it as an alternative to JavaScript. Not ideal, but an improvement.

    On Apple systems devs are now frequently using Swift. I haven't tried Swift, but it does look far nicer than C++ (its closest equivalent). Unlike Go, Swift uses reference counting (ARC) rather than a tracing garbage collector, so I think it has potential for systems programming and game engine development.

    Go & Rust are offering an alternative to using C & C++ for some use cases.

    Leave a comment:


  • Delgarde
    replied
    Originally posted by cj.wijtmans View Post
    I wish there was a time display element that accepted UNIX timestamps. That would be great. But instead you have to output UTC (in case there's no JavaScript) and then use JavaScript to convert it to the native time format...
    Unlikely to happen, because as widespread as they are, UNIX timestamps aren't a standard data-interchange format... so while JavaScript's Date has some support for them, HTML is unlikely to support anything other than ISO-8601 strings.

    Leave a comment:


  • aksdb
    replied
    Originally posted by mbello View Post
    Delphi and Visual C++/C#/Basic are no longer an option (we now actually care about supporting multiple platforms).
    I actually still use Lazarus/FreePascal when I need to develop a GUI. I can't stand C++, and everything else that supports Qt is even more horrid. Lazarus is kind of a Delphi clone, but with enough abstraction that in the end it uses either WinAPI, Qt, GTK or Carbon (I think) for rendering/widget handling. So I get native (truly native) applications, I am still cross-platform, and most importantly: I have a sane GUI designer (I really, really, really cannot stand declarative UI development).

    Leave a comment:


  • brrrrttttt
    replied
    Originally posted by trek View Post

    I completely agree with you for small groups, but big giants like Google and Facebook, which spend much time optimizing their datacenters, should not be allowed to lazily consume their customers' electric power.
    Yeah, can’t disagree with that. They are more than willing to spend the resources on better software when it’s running on their own metal.

    Leave a comment:


  • trek
    replied
    Originally posted by brrrrttttt View Post
    That's a very one-eyed view. The net benefit to society from enabling a much larger and more diverse group of people to create content is likely far larger than the extra CPU cycles consumed because they couldn't hand-write a dynamic web application in x86, x86_64, ARM, ARM + NEON, etc. assembler (yes, that's the logical conclusion of your "known optimization" argument). Life must be tough when you're unable to make such compromises?
    I completely agree with you for small groups, but big giants like Google and Facebook, which spend much time optimizing their datacenters, should not be allowed to lazily consume their customers' electric power.

    Leave a comment:


  • caligula
    replied
    Originally posted by mbello View Post
    Such a ridiculous situation, really. At the beginning, the web sucked because each and every interaction required a full page reload. Then came AJAX, and web pages started to behave more like the desktop UIs we were all used to. Fast forward 20 years and we have come full circle: web technologies that struggled so much to finally offer a "desktop-gui-like experience" are now being used as desktop GUI technology. But no, it is not a good idea.
    To make HTML5 + CSS + JS behave takes an incredible amount of code. Code written in an inefficient scripting language, with many layers of abstraction, and then we come to actually rendering the HTML, which takes a huge piece of software.
    The biggest sins of HTTP/HTML/CSS/JS:

    1) HTTP/HTML/CSS were originally built for these fully reloaded, mostly static pages
    2) JS started as a shitty toy language. It didn't really improve until the 2010s
    3) Nowadays we're abusing the whole software "stack". Even the most-used JS features only appeared in the last 5 years.
    4) JS is still 6 to 8 times slower than Java. Java is in some cases 6 times slower than native. So basically all the advancements we got from better hardware in the 2010s are neutralized by this shitty performance. Maybe even the speedups we got in the 2000s.
    5) You can't control the machine that well with JS. You can't do your own double buffering or explicit memory allocation, or decide yourself what to draw. The DOM model assumes all writes to the DOM must be rendered immediately. It's backwards.
    6) JS is dynamically typed, and large programs are being ported to JS. This will lead to an increasing number of bugs, thanks to dismissing all the language research done in 1970-2020.

    But then why are we here now? That speaks mostly to the failure of desktop GUI technology to evolve with the times. Delphi and Visual C++/C#/Basic are no longer an option (we now actually care about supporting multiple platforms). Java never took off (for good reasons). There are some interesting options, like Qt, but C++ is a big reason not to even try it.
    Maybe Flutter will finally fill this void?
    All those technologies failed in different ways:
    - GTK+ - object-oriented programming in C. Now in 2019 most of us know what a lie OOP was. It didn't deliver the properties it promised. OOP isn't that modular, for instance. Now, doing OOP in C is just totally fucked up. The only good thing is, writing bindings is easy. GTK1-2 are horribly outdated. Technically speaking, you want scene graphs, flicker & tear free graphics, smooth animation and video, and other such advanced features. You want to define control layouts with constraints, not pixels. GTK4 might be better, but the market has already leaned towards the browsers. The Qt folks will also continue to disagree with GTK.

    - Qt - interfacing with C++ is horrible; the language is kind of high level, but with the inherent ability to easily shoot your limbs off with a gatling gun. They later developed a high-level scripting language for C++ haters. It sort of solves many problems, but at the low level you still need to interface with C++. There's also a difference between Qt and KDE apps. Updating Qt 3 -> 4 -> 5 has been a major pain in the ass. It took years to port KDE. Not really proof that the toolkit is productivity-oriented in any way. Still, I think Qt is the most advanced toolkit for Linux users.

    - EFL - rasterman doesn't get why void* pointers are bad. It's impossible to imagine any success until you learn the basics of the programming languages you're supposed to use. There are threads on The Daily WTF about this.

    - all the legacy Unix toolkits - both the programming feels clumsy and the technical solutions are badly outdated. They merely exist to support legacy apps and their GUIs.

    - Mono ecosystem - not that bad IMO, but it matured a bit late. Everyone is using something else now. On Mono I'd probably use F#, not C#. But C# is there for the mediocre programmers.

    - Java ecosystem - thanks to years of legal struggles and wasting money on crappy niche Unix systems, the Java GUI toolkits didn't get any updates after Swing. Swing was a horrible clusterfuck where you needed tons of classes, not only for containers but for layouts and panes on top of panes. It was a mess which they cleaned up afterwards. They also fucked up JavaFX by first introducing a new, clumsy scripting language. It took them years to recover. Only now does the toolkit seem OK, but Oracle somehow screwed up the distribution, and packaging your modular app is a PITA. The Java language also constantly underestimates everyone's intelligence. You could use Clojure, but it's slow and less rigid with its type safety. Scala apparently fixes many problems, but the compile time is absolutely horrible, especially with Scalaz.

    - all the other toolkits in other languages - people can't really decide what language to use so they'll stick with C, C++, Python, and JS. So no winners here even if there were good toolkits available.
    Last edited by caligula; 02-11-2019, 01:13 AM.

    Leave a comment:


  • Luke_Wolf
    replied
    Originally posted by mbello View Post
    Such a ridiculous situation, really. At the beginning, the web sucked because each and every interaction required a full page reload. Then came AJAX, and web pages started to behave more like the desktop UIs we were all used to. Fast forward 20 years and we have come full circle: web technologies that struggled so much to finally offer a "desktop-gui-like experience" are now being used as desktop GUI technology. But no, it is not a good idea.
    To make HTML5 + CSS + JS behave takes an incredible amount of code. Code written in an inefficient scripting language, with many layers of abstraction, and then we come to actually rendering the HTML, which takes a huge piece of software.

    But then why are we here now? That speaks mostly to the failure of desktop GUI technology to evolve with the times. Delphi and Visual C++/C#/Basic are no longer an option (we now actually care about supporting multiple platforms). Java never took off (for good reasons). There are some interesting options, like Qt, but C++ is a big reason not to even try it.
    Maybe Flutter will finally fill this void?
    Eh... no. There was no failure of desktop GUI technology. What happened was that the web became a big deal, and web technologies happen to be very forgiving, which meant that a bunch of low-skilled jackoffs jumped on board and were afraid to move to other technologies, because they could focus all of their energy on the Web and make decent money doing that. A few of these jackoffs decided to try their hand at application development, and due to the whims of the market some of their apps became accepted. But even today the vast majority of applications a user will use are built with standard desktop GUI technologies rather than crap like Electron, and on mobile, developers have largely abandoned mobile websites for native applications, which use technologies that are in all practical measures equivalent to their desktop counterparts. A WPF or Qt/QML developer would be right at home with Android development and vice versa.

    Leave a comment:
