Red Hat Enterprise Linux 6.5 Preps New Capabilities


  • #16
    Originally posted by garegin View Post
    I agree. Which makes having a good GUI pointless. I don't think you should run a GUI on a webserver. As a matter of fact, kernel.org was hacked through a vulnerability in x.org, right?
    They didn't publish any such details, so I don't think you can make that claim. For the people arguing about servers, you forget that RHEL is used by several major organizations, including DreamWorks and so forth, for thousands of workstations. Some servers certainly need a GUI because the admin panels etc. are graphical. Ex: Oracle stuff.

    Comment


    • #17
      Originally posted by RahulSundaram View Post
      They didn't publish any such details, so I don't think you can make that claim. For the people arguing about servers, you forget that RHEL is used by several major organizations, including DreamWorks and so forth, for thousands of workstations. Some servers certainly need a GUI because the admin panels etc. are graphical. Ex: Oracle stuff.
      Amazon loves RHEL too

      Comment


      • #18
        Originally posted by tomato View Post
        GUI configs in RHEL don't matter for a very simple reason: RHEL is not a distro for SMBs with Steve, the accounting guy, doing the administration; it's for enterprises that have dedicated, knowledgeable people doing the administration. And guess what? People like that don't care for GUI configuration tools. GUI configuration tools don't scale to dozens or hundreds of servers. The CLI does, and it does so in many different ways, and you can pick which one you want to use.
        I take it you're not aware of RH's movement into infrastructure (big data, cloud management, PaaS). For those, building really good (web) GUIs is part of the process. It's expected that admins will be using those.
        For the more traditional desktop/server admins, they are increasingly having to compete with Windows, since the old Unix market is coming to resemble an increasingly desiccated, say, fruit. Those Windows admins make heavy use of GUIs for many, many tasks. Of course they have the wonderful PowerShell to handle the cases not covered by the GUIs, but the vast majority of the work they do will have a GUI.

        Originally posted by garegin View Post
        I agree. Which makes having a good GUI pointless. I don't think you should run a GUI on a webserver. As a matter of fact, kernel.org was hacked through a vulnerability in x.org, right?
        You say that, but above you were talking about running X on a server.

        Comment


        • #19
          Originally posted by MartinN View Post
          Exactly... and on Unix (Linux rather), an 'appealing' alternative is to install AJAX-enabled/wannabe-rich GUIs like Webmin or cPanel for administering the box (cPanel isn't even free, AFAIK)... but not a native GUI, like what you see in OS X's Preferences or GNOME's/KDE's attempts at supplying easy-to-use configuration GUIs...

          garegin nailed it - had it not been for this ingrained, entrenched, elitist attitude of many Unix CLI kings... Linux, or even UNIX in general, would've been a different beast today. Jobs seized on this - putting a nice GUI (NeXT) on a UNIX-based OS - and look where that got Apple... whereas Linux stayed in its server niche for the longest time.

          Thankfully, this is all in the past, and the momentum to bring a viable Linux GUI without the crufty 30+ year old X legacy - one that can benefit corporations and end users/developers alike - has picked up more steam in the last 2 years or so than in the 5-10 years prior. Intel, Red Hat, GNOME, KDE/Qt... there's definite consensus and motivation about where things on the desktop (or mobile) are headed, and even some positive reinforcement in the shape of Mir and Shuttleworth labeling the Wayland effort "a repeat of past (X) mistakes"...

          The lack of a solid GUI for Linux has been (and still is, though nowhere near to the same extent) the single biggest demoralizing factor holding back wider Linux adoption beyond the server room.
          This is revisionist history. Your analysis gives the impression that NeXTStep and Linux-based OS'es were developed largely in parallel, and that Steve Jobs's emphasis on graphical experience caused NeXTStep to pull ahead of Linux-based OS'es. This is incorrect in two ways:
          • NeXTStep predates the Linux kernel by about three years. The last version of NeXTStep was released before Linux-based OS'es reached a functionality level that could make fair comparisons possible.
          • NeXTStep and NeXT Computer were market failures. Focusing on the graphical experience was not some silver bullet that bred success and desktop adoption.

          I think an argument could be made that Apple's brand recognition and marketing competence had more to do with OS X's success than OS X's NeXTStep base did. However, Apple continued to struggle to sell computers until the iPod and iPhone made Apple trendy again. I have read in the past that Apple was experiencing a steady decline in desktop computer sales before they released the iPhone. I wish I could find hard numbers to back this up, but unfortunately my Google skills are not up to par. The oldest stats I found Googling put OS X at about 3% in 2007 (the release year of the first iPhone). Keep in mind that OS X was released in 2001. 3% by 2007 is not exactly a sign of a great, meteoric rise. Since then, however, pretty much all usage share stats indicate consistent growth in OS X adoption. You can draw whatever conclusion you want from that. The one that I draw is that the iPhone and other iOS devices had more to do with OS X becoming appealing than OS X itself did. The iPhone was a success, and Apple's other products got to tag along for the ride.

          I agree that a polished and intuitive graphical experience is very important to driving adoption of Linux-based operating systems. However, I believe that there are many other factors that are also very important and should not be overlooked. Pointing to OS X as an example of an operating system that was successful primarily because it had a better graphical experience is factually wrong and misleading. There is no single simple solution that will cause Linux-based OS'es to break into the desktop mainstream (except, maybe, a large and very expensive marketing campaign), and arguing that there is does not help the situation.
          Last edited by Serge; 10-09-2013, 05:31 PM. Reason: end user experience -> graphical experience

          Comment


          • #20
            Originally posted by Serge View Post
            This is revisionist history. Your analysis gives the impression that NeXTStep and Linux-based OS'es were developed largely in parallel, and that Steve Jobs's emphasis on graphical experience caused NeXTStep to pull ahead of Linux-based OS'es. This is incorrect in two ways:
            • NeXTStep predates the Linux kernel by about three years. The last version of NeXTStep was released before Linux-based OS'es reached a functionality level that could make fair comparisons possible.
            • NeXTStep and NeXT Computer were market failures. Focusing on the graphical experience was not some silver bullet that bred success and desktop adoption.

            I think an argument could be made that Apple's brand recognition and marketing competence had more to do with OS X's success than OS X's NeXTStep base did. However, Apple continued to struggle to sell computers until the iPod and iPhone made Apple trendy again. I have read in the past that Apple was experiencing a steady decline in desktop computer sales before they released the iPhone. I wish I could find hard numbers to back this up, but unfortunately my Google skills are not up to par. The oldest stats I found Googling put OS X at about 3% in 2007 (the release year of the first iPhone). Keep in mind that OS X was released in 2001. 3% by 2007 is not exactly a sign of a great, meteoric rise. Since then, however, pretty much all usage share stats indicate consistent growth in OS X adoption. You can draw whatever conclusion you want from that. The one that I draw is that the iPhone and other iOS devices had more to do with OS X becoming appealing than OS X itself did. The iPhone was a success, and Apple's other products got to tag along for the ride.

            I agree that a polished and intuitive graphical experience is very important to driving adoption of Linux-based operating systems. However, I believe that there are many other factors that are also very important and should not be overlooked. Pointing to OS X as an example of an operating system that was successful primarily because it had a better graphical experience is factually wrong and misleading. There is no single simple solution that will cause Linux-based OS'es to break into the desktop mainstream (except, maybe, a large and very expensive marketing campaign), and arguing that there is does not help the situation.
            We need more than marketing. We need standards. Alsa needs to be fixed. We need a solution to configuration and management that doesn't include bash scripts or, in general, opening a cli. Wayland will help tremendously with graphical stability, but we are technically lacking in some areas and no amount of marketing will fix that.

            Comment


            • #21
              Originally posted by liam View Post
              We need more than marketing. We need standards. Alsa needs to be fixed. We need a solution to configuration and management that doesn't include bash scripts or, in general, opening a cli. Wayland will help tremendously with graphical stability, but we are technically lacking in some areas and no amount of marketing will fix that.
              I do believe that marketing would do far more for increasing the usage share of Linux-based and *BSD OS'es than any one other thing, but the point I was trying to get across is that we need many things, not just any one thing. I disagree with you on specifics, however.

              "We need standards."
              (Sorry if I am strawmanning you on this one; I wasn't 100% sure what you meant by "we need standards", so I took a guess.)

              Do you mean we need to increase compatibility across multiple Linux-based OSes ("distributions") by way of standards? Well, we have LSB, and that's been a failure. LSB mandates standards that are in conflict with what distribution developers feel is the best approach. LSB was dead on arrival. Experienced standards bodies have figured out - and have developed policies and operating guidelines accordingly - that one does not simply "mandate" standards, period. Something must first become a de facto standard before it can hope to have success as a de jure standard.

              LSB tried to force consensus where consensus had failed to develop naturally. The various distributions did things the way they did because they felt that their approach was the best. For example, Debian developers and developers of distros based on Debian felt that the .deb packaging format had advantages over the .rpm format. If they did not feel this way, they would have switched to .rpm on their own, without being prompted by LSB. And on the other hand, requiring support for BOTH sides of an incompatibility, as was done with Gtk+ and Qt, just leads to bloat.

              I think that cross-distribution compatibility is very important, but I do not believe that this problem can be solved very easily. Standards by themselves will always fail to gain acceptance and adoption as long as the underlying causes of the incompatibilities are not addressed. Take, for example, FFmpeg vs Libav. Those are two projects with deeply ingrained animosity towards each other, and growing incompatibility between the two leads end users and downstream developers not directly involved with either project to take sides as well. Backing one with a standard and ignoring the other will result in the standard failing and will not have a strong impact on adoption of one over the other.

              Although the FFmpeg-Libav conflict is caused by engineers, there are other artificial incompatibilities* that are created by managers (hint: http://translate.google.com/#en/ru/peace). Driving more open source projects towards corporate control will not fix this problem. Instead, we'll trade freedom and incompatibilities born of passion for vendor lock-in and incompatibilities born of marketing / sales needs ("market segmentation", "product differentiation").

              *My apologies to any FFmpeg or Libav developers. I've read up on the projects and realize that the FFmpeg and Libav incompatibilities are not entirely artificial.

              "Alsa needs to be fixed."
              I'm not qualified to write about whether it's ALSA or something else in our audio stack that needs to be fixed, but I don't believe it's all that important for the typical desktop user. I'm not saying that typical users don't care about audio, I'm saying that for typical users our audio is "good enough". My direct experience is that audio in Linux-based OS'es still sucks for professional use, but I think that doesn't impact typical users that much.

              "We need a solution to configuration and management that doesn't include bash scripts or, in general, opening a cli."
              Yes, definitely. I think continued improvement in auto-detection and auto-configuration is the best approach, although improvement in and wider adoption of graphical administrative tools like Yast and MCC is also important.

              ***
              Ultimately, I agree with you that there are some areas where Linux-based OS'es are lacking technically. My personal vote goes to the lack of native ports of destination programs. Games, creative tools, tax and accounting software, that kind of stuff. However, I also feel that there are some areas where Linux-based and *BSD OS'es have significant advantages, or else we wouldn't be using Linux-based or *BSD OS'es in the first place. A good marketing campaign would focus on these advantages. I think that a very appealing sales pitch can be made for Linux-based OS'es ("reliable", "proven track record", "puts YOU in control", "respects your freedom and right to privacy", etc. etc.), and I think that such a marketing campaign would do orders of magnitude more for increased usage share than fixing technical disadvantages would. I think that without marketing, Linux and *BSD will always compete with a handicap.

              Comment


              • #22
                Ahh, shit. That was way longer than I intended it to be. Sorry for the wall of text.

                Comment


                • #23
                  Serge, I'm not gonna piece-meal quote everything you said, because that'd be a massive post. But I want to just hit on a few points...


                  Linux audio sucks for professionals. Out of the box... YES. Pulse isn't -designed- for "audio pros" (though the realtime extension may fix / help fix that); it's designed for the everyday user. If you want pro audio on Linux, you use a realtime kernel with JACK(2), in which case, properly tuned, you can get sub-4ms latency, or so I've been told.
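
                  For the curious, a rough sketch of what "properly tuned" tends to mean (the device name and numbers here are purely illustrative, not a recommendation): start jackd with the realtime flag and small buffers, something like

                      # hypothetical example: realtime scheduling, ALSA backend, 64-frame periods
                      jackd -R -d alsa -d hw:0 -r 48000 -p 64 -n 2

                  The figure people usually quote is frames-per-period x periods / sample rate, so 64 x 2 / 48000 is roughly 2.7 ms of buffering - on paper, anyway; whether your hardware and kernel actually deliver that is a separate question.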

                  ~~~~~

                  Configuration that's not based around bash scripts... Yes. Better configuration options are a big plus, but they can be command-line based (and then just GUI frontends for the commands is fine). The one that comes to mind... NetworkManager. `nmcli` is actually a very fine tool if you need network management at the command-line level, and I've used it a few times when the frontend would fail to connect for one reason or another. Its commands are very clear (i.e. literal, not arcane), and if you pass the -p flag to it, it gives very nicely formatted output.
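
                  To give a flavor of it (the connection name below is made up), the kind of thing I mean is:

                      nmcli -p dev status              # pretty-printed list of devices and their state
                      nmcli -p con list                # show the configured connections
                      nmcli con up id "Home Wifi"      # bring a connection up by its name
                      nmcli dev wifi list              # scan for nearby access points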

                  I wish there were easier ways to do some other things (such as Samba: why can't the File Share KCM in KDE Settings offer more options than just setting the username and password for OTHER shares?? What if I want to MAKE a share?)
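
                  Right now "making a share" means dropping to a terminal anyway - something along these lines in /etc/samba/smb.conf (the share name, path and users are just an example), then a testparm to sanity-check it and a reload of smbd:

                      [projects]
                          path = /srv/projects
                          read only = no
                          valid users = alice bob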

                  ~~~~~

                  Standards will probably come down to NOT being LSB-style things, and instead be things like: the systemd suite, NetworkManager, Avahi, PackageKit, and the like. Just little assumptions that can be made about HOW things work and what DOES what and what is WHERE. A perfect example being some config files:

                  Upstream systemd says that your hostname gets set in "/etc/hostname", not in /etc/conf or /etc/sysctl/ or anywhere else. Just a plain-text, one-line file called /etc/hostname. Is it simple? Yup. Is it stupid we had to mandate it? Yuuuuuuup. Did we HAVE to mandate it? You bet. Everyone had it somewhere different.

                  What about distro info? /etc/os-release. A plain-text, INI-style config file, not /etc/fedora-release or /etc/rh-release or /etc/debian-info or whatever else we all had. Just /etc/os-release.
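
                  To make both of those concrete (the hostname and the values are just an example, roughly what a Fedora box of that era ships):

                      # /etc/hostname - one line, nothing else
                      webserver01

                      # /etc/os-release - plain key=value pairs
                      NAME=Fedora
                      ID=fedora
                      VERSION_ID=19
                      PRETTY_NAME="Fedora 19 (Schrödinger's Cat)"

                  And on a systemd box, `hostnamectl set-hostname webserver01` will write the hostname file for you.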

                  Module options? /etc/modprobe.d/, and within that it's recommended you do ($MODULE_NAME).conf or ($DESCRIPTIVE_NAME).conf.

                  Extra modules to load? /etc/modules-load.d/, with plain-text .conf files listing module names. Done.
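
                  For example (module names and options picked purely for illustration):

                      # /etc/modprobe.d/blacklist-nouveau.conf
                      blacklist nouveau

                      # /etc/modprobe.d/snd-hda-intel.conf
                      options snd-hda-intel power_save=1

                      # /etc/modules-load.d/virtualbox.conf - one module name per line
                      vboxdrv
                      vboxnetflt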

                  Nowhere else. Stop fscking with it.

                  Is it an extreme amount of standardization? Nope, but it's SOMETHING, and it gets us going in the right direction.

                  Comment


                  • #24
                    Originally posted by Serge View Post
                    "Alsa needs to be fixed."
                    I'm not qualified to write about whether it's ALSA or something else in our audio stack that needs to be fixed, but I don't believe it's all that important for the typical desktop user. I'm not saying that typical users don't care about audio, I'm saying that for typical users our audio is "good enough". My direct experience is that audio in Linux-based OS'es still sucks for professional use, but I think that doesn't impact typical users that much.
                    No, Linux audio doesn't suck for professional use... PulseAudio does though - the latencies are simply unforgivable, and if you look at any serious audio software on Linux, you'll notice that none of them use PulseAudio by default - some offer it as an option, but even then they warn the user that it's not something they should use if they want good latencies and reliability.

                    The very best audio software (e.g. Ardour) doesn't support PulseAudio at all, and instead relies mostly on JACK.

                    Linux audio doesn't suck; there's some very good audio software on Linux. It's just that setting up a good audio environment on Linux takes some work and has a bit of a learning curve.

                    Comment


                    • #25
                      Originally posted by Ericg View Post
                      ...
                      Standards will probably come down to NOT being LSB-style things, and instead be things like: the systemd suite, NetworkManager, Avahi, PackageKit, and the like. Just little assumptions that can be made about HOW things work and what DOES what and what is WHERE. A perfect example being some config files:

                      Upstream systemd says that your hostname gets set in "/etc/hostname", not in /etc/conf or /etc/sysctl/ or anywhere else. Just a plain-text, one-line file called /etc/hostname. Is it simple? Yup. Is it stupid we had to mandate it? Yuuuuuuup. Did we HAVE to mandate it? You bet. Everyone had it somewhere different.
                      ...
                      The problem with that is, systemd has significant obstacles in the way of it seeing sufficiently wide adoption to make assumptions about its availability. Debian has been my main distro since 2009. I used to get frustrated with Debian's drawbacks and I would try out other distros to find one that I like more than Debian. Although I eventually realized that Debian was indeed the best distro for my preferences, I also got to interact with systemd frequently when exploring other distros, and in that time I reached the conclusion that systemd is excellent software and that I would very much like to see Debian adopt it as the default service manager. This, however, is not going to happen any time soon - probably not for several years, in fact.

                      I believe this because, having read the criticisms that some Debian users and developers express of systemd, I realize that systemd does in fact have several legitimate drawbacks. Furthermore, many of these drawbacks are the result of deliberate design choices and are very unlikely to be addressed by upstream. For example, it is a deliberate design choice that causes systemd to not be compatible with non-Linux kernels. Thus, systemd is not compatible with Debian GNU/kFreeBSD, and this will not change unless the design philosophy behind systemd changes or unless the FreeBSD project decides to extend their kernel to support systemd, neither of which is likely to happen.

                      The reason this is important is that 60% of the distros listed on distrowatch.com are either Debian or Ubuntu derivatives. In other words, at least 60% of the distros do not deploy systemd as the default service manager / init system, and given the conditions as they stand now, it is unlikely that this will change in the near future. This kind of adoption is simply too low to make assumptions about systemd's availability. It will not become a de facto standard, and a de jure standard mandating systemd availability will fail the same way that LSB failed.

                      Comment


                      • #26
                        Originally posted by Serge View Post
                        The reason this is important is that 60% of the distros listed on distrowatch.com are either Debian or Ubuntu derivatives. In other words, at least 60% of the distros do not deploy systemd as the default service manager / init system, and given the conditions as they stand now, it is unlikely that this will change in the near future. This kind of adoption is simply too low to make assumptions about systemd's availability. It will not become a de facto standard, and a de jure standard mandating systemd availability will fail the same way that LSB failed.
                        But if we start getting Tizen-based ultrabooks at some point, that'd probably change the game quite a bit... IIRC Tizen uses systemd.

                        For that matter, the number of Ubuntu-based distros is likely to decrease when Ubuntu starts using Mir... if those distros rebase on Debian or some other Debian-based distro, that would still mean they wouldn't use systemd, but some of them might choose to rebase on something else...

                        Comment


                        • #27
                          Originally posted by Honton View Post
                          Are you saying Debian can't adopt systemd before Ubuntu gets Mir in a desktop release? I find that hard to believe since systemd is already in testing and sees much more activity than the competition.
                          Debian would have to give up its ability to run on alternative kernels, which so far they have refused to. systemd purposefully uses features the Linux kernel has (regardless of whether or not they exist elsewhere) in order to give the best experience possible.

                          Comment


                          • #28
                            Originally posted by Honton View Post
                            Are you saying Debian can't adopt systemd before Ubuntu gets Mir in a desktop release? I find that hard to believe since systemd is already in testing and sees much more activity than the competition.
                            That systemd is in a Debian repository does not mean that it is the default init system. And the fact that it sees more activity than an already established and well-functioning solution does not change the fact that System V init is (and will remain for some time) the default init system.

                            Comment


                            • #29
                              Originally posted by Ericg View Post
                              Debian would have to give up its ability to run on alternative kernels, which so far they have refused to. systemd purposefully uses features the Linux kernel has (regardless of whether or not they exist elsewhere) in order to give the best experience possible.
                              They don't have to give up on alternative kernels. They just have to run alternative init systems depending on which variant you are running. Packages can ship both init scripts and systemd unit files; systemd will prioritize the unit files, while other init systems will just use the init scripts. Automatically generating init scripts is also being investigated. The best solution is to implement the same interfaces across init systems.
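
                              To illustrate what "ship both" looks like (the daemon name here is made up): a package installs its old script at /etc/init.d/food for sysvinit, plus a unit file that systemd will pick up in preference to it, roughly:

                                  # /lib/systemd/system/food.service
                                  [Unit]
                                  Description=Hypothetical foo daemon
                                  After=network.target

                                  [Service]
                                  ExecStart=/usr/sbin/food

                                  [Install]
                                  WantedBy=multi-user.target

                              On sysvinit or kFreeBSD the .service file is just an ignored text file, so nothing breaks; the cost is that maintainers carry two descriptions of the same service until one can be generated from the other.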

                              Comment


                              • #30
                                Originally posted by Serge View Post
                                "We need a solution to configuration and management that doesn't include bash scripts or, in general, opening a cli."
                                Yes, definitely. I think continued improvement in auto-detection and auto-configuration is the best approach, although improvement in and wider adoption of graphical administrative tools like Yast and MCC is also important.
                                Pardon me, I don't disagree with what you say, just with the way it seems to be getting commonly implemented. So please let me restate that, ever so slightly modified...

                                "We need a solution to configuration and management that doesn't include bash scripts or, in general, opening a cli, but does not forbid using bash scripts or the cli, either."

                                Back in the day, I had to do both in order to get my systems running properly, or even booting at all. I certainly appreciate all of the auto-configuring gizmos that make my system easier to run and boot, UNTIL they fail to work correctly. Then I just want to do whatever it takes to get running again, and I don't want those same gizmos sitting in the way, obfuscating the old job I used to be able to readily figure out how to do, or even working against me, undoing my every tweak or fix. Please leave Linux hackable - it seems like there are those trying to turn Linux into Windows. It doesn't have to be that way - we should be able to have both.

                                Comment
