Google Rolls Out OnHub Router, Powered By Gentoo Linux


  • #41
    Originally posted by SystemCrasher
    Maybe because OpenWRT targets small devices with 32-64 MB RAM and something like 8 MB of flash ROM? It's kind of awkward on more powerful, more PC-like devices. E.g. you do not need the crippled and minimized "ps" or "top" from busybox on a machine with 1 GB of RAM. But you'll be better off with busybox on a machine with 32 MB of RAM and 8 MB of flash for everything. Gentoo simply wouldn't fit on it.

    Also, OpenWRT has a very questionable release policy. While it ensures the device stays operational, there are no security fixes. And given that stuff like WPA Supplicant has had critical bugs allowing remote code execution in the past, I would view that as a shortcoming of OpenWRT. Though I should admit any decision is a tradeoff.
    OpenWRT doesn't only target small devices with 32-64 MB of RAM and 8 MB of flash ROM. Those are pretty much the lowest specs it runs on these days. Normal $100-$150 routers have 128-256 MB of RAM and 16-64 MB of flash. Even then, the OpenWRT installation ships with many features disabled. I'm just saying that it scales well past those specs. The init system isn't as flexible as systemd and it lacks desktop packages, but it also works as a lightweight server. If it's missing a package, just build it yourself.



    • #42
      Originally posted by droidhacker

      Hahaha, paranoia much there? You do realize that Google need not have root access for the thing to update automatically... as long as *IT* has root access TO ITSELF. Which, by definition, it must. I can pretty much guarantee that nobody will be logging into your router over SSH and running stuff on it.
      I can't even update regular software without providing root access to the machine and you're telling me a firmware update can be done with less privileges?



      • #43
        Originally posted by bug77

        I can't even update regular software without providing root access to the machine and you're telling me a firmware update can be done with less privileges?
        No, what droidhacker is pointing out is that there is a semantic difference between "having root access" and "providing OS updates". Technically it's kind of true that if your operating system ever downloads and installs an update from a remote server then "they have root". But if you point out that Microsoft "has root" on every Windows server, or that Ubuntu "has root" on every Ubuntu server, then people get confused, because they think that means there is some kind of secret remote access backdoor.
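        To make the distinction concrete, here's a minimal Python sketch of how a device-side updater can work; the URL, manifest format, and partition path are made-up placeholders, not anything Google actually ships. The router runs this itself, with its own root privileges, and the vendor's only role is publishing images that the device fetches and verifies over an outbound connection, so "Google has root" just means Google signs what the device chooses to install; nobody logs in from outside.

Code:
# Hypothetical device-side updater: the router polls the vendor, verifies the
# downloaded image, and installs it itself. No inbound SSH or remote root login.
import hashlib
import json
import urllib.request

MANIFEST_URL = "https://updates.example.com/onhub/manifest.json"  # made-up URL

def fetch_manifest():
    # Assumed manifest format: {"url": ..., "sha256": ...}. A real updater would
    # also check a public-key signature over the manifest, not just a digest.
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        return json.load(resp)

def download_and_verify(url, expected_sha256):
    with urllib.request.urlopen(url) as resp:
        image = resp.read()
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        raise ValueError("digest mismatch, refusing to install")
    return image

def install(image):
    # Write the verified image to the inactive partition (placeholder device path).
    with open("/dev/mmcblk0p4", "wb") as dev:
        dev.write(image)

if __name__ == "__main__":
    manifest = fetch_manifest()
    install(download_and_verify(manifest["url"], manifest["sha256"]))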



        • #44
          Originally posted by schmalzler
          Your POV (and the way you present it) is a slap in the face of all the Gentoo devs who really try hard to make updates as smooth as possible.
          It probably wasn't meant as a dig at the devs, just a realistic assessment of Gentoo. One of the problems with Gentoo was that updates did sometimes break a system, and when the breakage was bad it could require hours of downtime while recompiling: e.g. the admin sets some USE flags that end up generating mutually incompatible packages, and there is no way for portage to calculate that in advance - it just fails at compile time. The Chrome OS approach of using Gentoo to build a new root fs for each update, then downloading an update delta and installing it into an alternate partition, is much more reliable and better for most users - updates are atomic, so the system is never left in a half-updated or broken state, and if an update turns out to be bad, the boot process can always automatically fall back to a known good one.
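          For illustration, here's a rough Python sketch of that A/B idea, with invented partition paths and a plain file standing in for the boot flag; it is not Chrome OS's actual updater, just the shape of it. The new root fs only ever goes to the inactive slot, and flipping the flag is the single commit step, so a failed or interrupted update leaves the known-good slot untouched.

Code:
# Hypothetical A/B update flow: two root partitions, only one active at a time.
# Partition paths and the boot-flag file are placeholders, not Chrome OS's real mechanism.
import hashlib

SLOTS = {"A": "/dev/sda3", "B": "/dev/sda5"}   # made-up slot layout
ACTIVE_FLAG = "/boot/active_slot"              # made-up boot-flag location

def active_slot():
    with open(ACTIVE_FLAG) as f:
        return f.read().strip()

def inactive_slot():
    return "B" if active_slot() == "A" else "A"

def write_image(slot, image_path, expected_sha256):
    with open(image_path, "rb") as src:
        image = src.read()
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        raise ValueError("corrupt update image, keeping the current slot")
    with open(SLOTS[slot], "wb") as dev:
        dev.write(image)

def apply_update(image_path, expected_sha256):
    target = inactive_slot()
    write_image(target, image_path, expected_sha256)
    # Flipping the flag is the only commit step, so the update is atomic: until it
    # succeeds, the next boot still uses the known-good slot, and a bad update can
    # be rolled back simply by pointing the flag at the old slot again.
    with open(ACTIVE_FLAG, "w") as f:
        f.write(target)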



          • #45
            Originally posted by chrisb
            e.g. the admin sets some USE flags that end up generating mutually incompatible packages, and there is no way for portage to calculate that in advance - it just fails at compile time.
            That's quite rare. I encounter it maybe once or twice a year, and only on really obscure packages or unusual corner cases. And even then, it gets fixed quite quickly after filing a bug report. No different than the occasional package collision that happens with binary distros.

            Besides, if you are working with a production environment, you have a designated build platform where issues from changes in packages all get worked out before pushing the package builds out to the production nodes.
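            As a sketch of that workflow (host names and package atoms are just examples, and it assumes the production nodes are already pointed at the build host as a binhost via PORTAGE_BINHOST or binrepos.conf): the build platform compiles everything once with --buildpkg, so USE-flag conflicts and compile failures surface there, and the production nodes only ever install the resulting binary packages.

Code:
# Sketch of a Gentoo build-host workflow: compile and package once on a build
# platform, then have production nodes install only the prebuilt binaries.
# Host names, package atoms, and the ssh transport are illustrative assumptions.
import subprocess

PACKAGES = ["app-misc/screen", "net-misc/openssh"]             # example atoms
PRODUCTION_NODES = ["node1.example.com", "node2.example.com"]  # example hosts

def build_on_build_host():
    # --buildpkg writes binary packages to PKGDIR as well as installing locally,
    # so USE-flag conflicts and compile failures show up here, not in production.
    subprocess.run(["emerge", "--buildpkg", "--update", "--deep"] + PACKAGES, check=True)

def update_production_node(node):
    # Production nodes pull the prebuilt packages (via the configured binhost)
    # instead of compiling anything themselves.
    subprocess.run(
        ["ssh", node, "emerge", "--usepkg", "--getbinpkg", "--update", "--deep"] + PACKAGES,
        check=True,
    )

if __name__ == "__main__":
    build_on_build_host()
    for node in PRODUCTION_NODES:
        update_production_node(node)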



            • #46
              Originally posted by FishB8

              That's quite rare. I encounter it maybe once or twice a year, and only on really obscure packages or unusual corner cases. And even then, it gets fixed quite quickly after filing a bug report. No different than the occasional package collision that happens with binary distros.

              Besides, if you are working with a production environment, you have a designated build platform where issues from changes in packages all get worked out before pushing the package builds out to the production nodes.
              Well, in that case you actually have a setup similar to mine, except I push filesystem images after they've been thoroughly tested. My situation is that I support workstations running as thin clients. It's easier for me because I don't have to worry about any of that. The actual workstations only have access to the VPN, so local security comes down to physical access.

              I probably use more bandwidth than you do when I push out updates, but the only thing I have to do is push and reboot, which is all scripted.

              EDIT: The terminal servers that provide the desktop are running Windows 2008.

              EDIT: The biggest problem I have is the workstations themselves. Across the network there is a pretty wide variety of hardware, so I actually have a number of /boot filesystems made for different workstations. That was the hardest part to figure out, but it doesn't get updated unless I'm having specific problems that need to be fixed.
              Last edited by duby229; 20 August 2015, 03:23 PM.
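              The push side of a setup like that can be a very small script; here's a Python sketch with made-up host names, image path, and boot convention, just to show the shape of "push and reboot, which is all scripted". It is not duby229's actual setup.

Code:
# Hypothetical push-and-reboot script for image-based workstations: copy a tested
# root filesystem image to each machine, then reboot it so it comes up on the new
# image. Host names, paths, and the "picked up at boot" convention are assumptions.
import subprocess

WORKSTATIONS = ["ws-01", "ws-02", "ws-03"]           # placeholder host names
NEW_IMAGE = "/srv/images/workstation-root.squashfs"  # image already tested elsewhere
REMOTE_PATH = "/boot/root-next.squashfs"             # where the boot process looks for it

def push_and_reboot(host):
    # Copy the image over ssh, then reboot; the workstation switches to the new
    # image on the way up, so the only per-host work is this push.
    subprocess.run(["scp", NEW_IMAGE, f"root@{host}:{REMOTE_PATH}"], check=True)
    subprocess.run(["ssh", f"root@{host}", "reboot"], check=True)

if __name__ == "__main__":
    for host in WORKSTATIONS:
        push_and_reboot(host)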



              • #47
                Originally posted by FishB8
                Besides, if you are working with a production environment, you have a designated build platform where issues from changes in packages all get worked out before pushing the package builds out to the production nodes.
                That's fine and how it should be. But especially among Debian admins I have noticed a "fire-and-forget" approach to updates, running apt-get upgrade from cron. Such a thing would not work on Gentoo or other rolling-release distros.

                Originally posted by duby229
                Well, in that case you actually have a setup similar to mine, except I push filesystem images after they've been thoroughly tested. My situation is that I support workstations running as thin clients.
                Yeah, but you cannot modify the filesystem while the normal operating system is running. This means that you'll need to reboot, which is annoying for a client system and pretty bad for a server. Smarter solutions I have encountered install new packages on a test system and then rsync the result to the production system.
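                A hedged sketch of that test-then-sync idea, with invented paths and host name: new packages are only ever built and installed into the staging root, and the production host just receives the resulting files, with host-specific config and runtime state excluded so nothing has to be rebuilt (or rebooted) there.

Code:
# Sketch of the test-then-sync approach: packages are built and installed into a
# staging root, and the live host only receives the resulting files over rsync.
# Paths, the host name, and the exclude list are illustrative assumptions.
import subprocess

STAGING_ROOT = "/srv/staging-root/"     # root fs of the test system (e.g. a chroot)
PRODUCTION = "root@prod.example.com:/"  # live host that only receives files

# Host-specific config and runtime state on the live box that must not be overwritten.
EXCLUDES = ["/etc", "/home", "/var", "/proc", "/sys", "/dev", "/run", "/tmp"]

def sync_to_production():
    cmd = ["rsync", "-aHAX", "--delete"]
    for path in EXCLUDES:
        cmd += ["--exclude", path]
    cmd += [STAGING_ROOT, PRODUCTION]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    sync_to_production()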



                • #48
                  Originally posted by chithanh

                  Yeah, but you cannot modify the filesystem while the normal operating system is running. This means that you'll need to reboot, which is annoying for a client system and pretty bad for a server. Smarter solutions I have encountered install new packages on a test system and then rsync the result to the production system.
                  At minimum you still have to restart X, so I never saw a problem with rebooting. I also have downtime at night at most places, so it's probably easier for me. But even if you don't have regular downtime, there should be planned downtime.



                  • #49
                    As long as it's 'open' to some degree I'll consider it for my next router. The last thing I want is to be locked into a device that we later find out spies on or tracks us one way or another.

                    I would just roll my own router, but I like having a small, power-efficient device, and good luck finding an 802.11ac card that's 3x3 and Linux-supported. The best I could find is a 2x2 Intel card (thank god for Intel's stable Linux support).



                    • #50
                      Originally posted by MNKyDeth
                      This might be a bit off topic, but...


                      Computer enthusiasts, whether Linux, Windows or Mac, will most likely go through computer hardware faster than a normal person, imo. Like me: I try to reuse any older equipment, so I pass hardware down as it gets upgraded to the different devices in my house. Main computer -> (sometimes server) -> HTPC -> test server -> router.

                      By doing this I use pfSense. Grats to Google for entering the router end of things, but... like someone previously posted, there are too many products out there already. pfSense is simply amazing, and with all my old hardware I couldn't ask for a better software router/firewall.
                      Now... I haven't found any router out there that can compete with pfSense, and I doubt Google's will either.

                      My 2c
                      I've thought about doing this but haven't yet, for a few reasons.
                      One, antennas: the OnHub has two 3x3 antennas, plus one for "congestion control" (not sure how that works; is that part of the ac standard?). I'd also like extra ethernet ports (I have several consoles, and the connection in my building is far more reliable when wired). Two, I don't trust my knowledge of network security enough to build this all from "scratch"; DD-WRT, or whatever it's called, is complicated enough.
                      I'd love to actually do this because it'd be a great learning experience, but the routers are just cheaper and better than what I've been able to come up with.
                      Now, a dedicated firewall box is fine. That's a great way to repurpose an old-ish system (though you obviously can't go too crazy or you'll rapidly overwhelm the CPU).

                      Here's the thing: I trust Google to look out for their interests, and their interest is to keep the data they've collected about my profile (though NOT connected with my name) to themselves. I don't recall ever hearing about a Google data breach involving their customers' identities (I imagine it's happened, but I simply don't recall it). If the FBI/NSA/saucer people want to snoop on me in particular, there's nothing I can do about it. Proxies and Tor won't save you.

