The Free Software Foundation Endorses First Router In 3 Years - But It's 10/100 + 802.11n WiFi


  • #31
    Originally posted by pal666 View Post
    All non-entry-level routers are higher than .n. You will probably get more sustained throughput via gigabit Ethernet, but surely in good conditions you can see higher wireless speeds at least sometimes. The same goes for .n vs. 100 Mbit.
    Do your research next time. 2.4 GHz performance with 802.11n in the ballpark of 90 Mbps is reserved for the best routers on the market.
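
    For a rough sanity check of that figure, here is a back-of-the-envelope estimate (my own numbers, not from the article): standard 802.11n PHY rates for 20 MHz channels, which is what a crowded 2.4 GHz band usually forces you into, combined with an assumed 50-60% MAC efficiency.

# Rough 802.11n throughput estimate on 2.4 GHz, 20 MHz channels (short GI).
# Assumption: real-world TCP throughput is ~50-60% of the PHY rate; actual
# results vary a lot with distance, interference and client hardware.
PHY_RATES_MBPS = {
    "1x1 MCS7":  72.2,
    "2x2 MCS15": 144.4,
    "3x3 MCS23": 216.7,
}

for config, phy in PHY_RATES_MBPS.items():
    low, high = 0.5 * phy, 0.6 * phy
    print(f"{config}: PHY {phy:.1f} Mbps -> roughly {low:.0f}-{high:.0f} Mbps of real throughput")
# Only the 2x2 and 3x3 configurations get anywhere near 90 Mbps of actual throughput.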

    Originally posted by pal666 View Post
    Many things make no sense from some point of view but are still being sold. Most people don't have WAN bandwidth above 100 Mbit available anyway; that's why I said it's acceptable for them.
    I don't remember whether the median Internet connection speed surpassed 100 Mbps about a year ago, at least in Europe. People keep getting faster and faster connections. I'd call gigabit a standard these days if you live somewhere with fiber.



    • #32
      It's rather simple: every .ac radio I know of requires firmware, and of course that firmware is a murky non-free blob doing who knows what. The GNU mission was never about pleasing dumbass consumers whose only wish is to consume more and more while ignoring all the consequences, be it global warming, planet pollution, giving up freedoms or whatever. Oh yes, there is no such thing as a free lunch. And even if you paid for your lunch, these days that means very little and often won't keep you from being treated like a mouse caught in a mousetrap.

      Additionally, I have to admit .ac is short-ranged. Even .n barely works in a large house, falling back to the lowest modulations (or even .g rates) on 2.4 GHz, and 5 GHz has even worse penetration than .n. The higher the frequency, the more challenging things get on their own: less power can be transmitted, absorption by obstacles increases, and so on. On top of that, the energy is spread in a rather thin layer over a wide spectrum to keep speeds high, which makes things very fragile and short-ranged. So it works fine if you sit right next to your router, but two rooms away it can be an entirely different story, especially if you look at the wireless stats and measure actual transfer speeds.
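
      For anyone curious how much of that is plain physics, here is a minimal free-space path loss comparison (the distance and channel frequencies are just example values, and wall absorption, which hurts 5 GHz even more, is not modeled):

import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

d = 10.0                        # metres, example distance
loss_24 = fspl_db(d, 2.412e9)   # 2.4 GHz band, channel 1
loss_57 = fspl_db(d, 5.745e9)   # 5 GHz band, channel 149

print(f"FSPL at {d} m: 2.4 GHz = {loss_24:.1f} dB, 5.7 GHz = {loss_57:.1f} dB")
print(f"Extra loss at 5 GHz: {loss_57 - loss_24:.1f} dB, before any walls")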

      Even funnier, virtually none of those customers is capable of building a sane "core network" with roamable APs to compensate for all that. At most there are expensive "professional" solutions that do it, but those are clearly overkill for a typical house. So the "faster, faster, faster" race leads to worse and worse coverage. Some vendors already dare to violate the regulations by transmitting far more power than allowed, just to ensure things at least work somehow for their customers, but that is foul play and cheating to begin with.

      Ahh, and how does it work out? The stock firmware transmits at something like 24-27 dBm (to get at least a somewhat sane range). You flash OpenWRT to get rid of the nasty spying, vendor-locked blob, the power dwindles to a shy 20 dBm, coverage goes nowhere... then you suddenly figure out half the channels are disabled for no apparent reason, and you either mumble "open source sucks" and stick with the factory blob and all its downsides, or resort to rampant hacking to re-enable the hardware capabilities the stock firmware had out of the box. As if this folly weren't already enough, the FCC and the like get jealous and demand firmware signatures or something, to ensure you either enjoy the treacherous blob or get shitty hardware performance if you try to escape the jail.
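
      To put those dBm figures into perspective, a quick conversion (using the numbers above as examples):

def dbm_to_mw(dbm: float) -> float:
    """Convert transmit power from dBm to milliwatts."""
    return 10 ** (dbm / 10)

for dbm in (20, 24, 27):
    print(f"{dbm} dBm = {dbm_to_mw(dbm):.0f} mW")
# 20 dBm = 100 mW, 24 dBm ~ 251 mW, 27 dBm ~ 501 mW: dropping from 27 dBm
# to 20 dBm is roughly a 5x cut in transmit power.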

      Whatever, do you people really think it should work like that? That's where the FSF eventually has a point... and if modern hardware more and more turns into treacherous, backdoored, malicious or misbehaving shit, that's hardly the FSF's fault. Laughing at the FSF on these grounds is utterly dumb, to say the least.



      • #33
        Originally posted by DoMiNeLa10 View Post
        I don't remember whether the median Internet connection speed surpassed 100 Mbps about a year ago, at least in Europe.
        Still, a gigabit switch can be a nice addition for LAN (computer-to-computer) transfers. It's a relatively niche use, but being able to transfer files between your own computers 10x faster wouldn't hurt, even if it doesn't really speed up your Internet connection. Beyond the "median connection speed", most servers just won't serve you at gigabit rates anyway; usually they have plenty of other clients to chew on, except in the case where server capacity greatly exceeds demand, which means an idle site/service on a dedicated server and connection. That kind of fun is rather expensive and inefficient, so it isn't a widespread scenario. Furthermore, this particular router is a "mini", so it isn't really meant for LAN-to-LAN traffic anyway, and 100 Mbps doesn't look all that limiting.
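
        To illustrate what that 10x means in practice, a quick estimate (the 20 GB file size and the ~90% link efficiency are just assumptions):

def transfer_time_s(size_gb: float, link_mbps: float, efficiency: float = 0.9) -> float:
    """Rough transfer time for a file over a LAN link.

    efficiency approximates TCP/IP and Ethernet overhead (assumed ~90%).
    """
    size_bits = size_gb * 8e9
    return size_bits / (link_mbps * 1e6 * efficiency)

for link in (100, 1000):
    minutes = transfer_time_s(20, link) / 60  # example: a 20 GB file
    print(f"20 GB over {link} Mbps: ~{minutes:.0f} minutes")
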
        Last edited by SystemCrasher; 01 October 2019, 04:06 PM.



        • #34
          Originally posted by SystemCrasher View Post
          most servers just won't serve you at gigabit rates
          It depends on the site. YouTube tends to cap out at around 280 Mbps for me, and Steam seems to be able to handle any amount of bandwidth I throw at it, so I assume they'd serve 10 Gbps to people who have such connections. There are mainstream uses where high bandwidth is useful, and there's always BitTorrent. Microsoft takes advantage of people's connections as well to send updates between devices.

          I don't think there's ever enough bandwidth, as it's quite easy to find a use where you'll want more.



          • #35
            Originally posted by DoMiNeLa10 View Post
            It depends on the site. YouTube tends to cap out at around 280 Mbps for me, and Steam seems to be able to handle any amount of bandwidth I throw at it, so I assume they'd serve 10 Gbps to people who have such connections. There are mainstream uses where high bandwidth is useful, and there's always BitTorrent. Microsoft takes advantage of people's connections as well to send updates between devices.

            I don't think there's ever enough bandwidth, as it's quite easy to find a use where you'll want more.
            As for torrents... those are better done over a wired connection.
            1) Faster and more reliable: wired traffic takes less processing on the router side and isn't sensitive to noise or other systems' activity. The router's CPU and RAM are the bigger concern here, I guess. If someone wants gigabit plus at least some traffic processing, they probably want a full-fledged PC chewing on that; even the most powerful SBCs would struggle, and typical router hardware can't really handle it, except maybe in a hardware accelerator, which is fairly dumb, so you lose the advantage of it being Linux (Linux can do a lot of fun things). The packet-rate sketch after this list gives a rough idea why.

            2) Air is a shared medium. If you live in a desert, hogging "air time" indefinitely isn't a problem; in populated areas it can upset others. High modulation indexes are picky about noise, and the wide bandwidth of the fastest modes makes them quite likely to become "noise" for someone else; if that trend goes on, at some point weird things happen. On a side note, OpenWRT has proven to be quite dumb about frequency agility: even a nightly build at most sniffs the air at boot-up to select a channel, and it doesn't care to move away if an interference source shows up later. Ironically, many stock firmwares are much more agile, so they will readily pick the channels where OpenWRT elected to settle. In the best case they move off again, but that depends on the remote system's will, so it may or may not happen, lol. On another side note, I've sent a few funny packets to the most exceptional hogs in "dense" areas; somehow that can be verrrry persuasive.
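
            Here is the rough packet-rate math behind point 1 (using the usual Ethernet frame-size extremes; the per-packet processing cost on the router's CPU is what usually hurts):

def packets_per_second(link_gbps: float, frame_bytes: int) -> float:
    """Packets per second needed to saturate a link at a given frame size
    (ignores inter-frame gap and preamble, so it slightly overestimates)."""
    return link_gbps * 1e9 / (frame_bytes * 8)

for frame in (1500, 64):
    pps = packets_per_second(1.0, frame)
    print(f"1 Gbps with {frame}-byte frames: ~{pps:,.0f} packets/s to route or filter")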

            As for YouTube: YouTube and other Google things are quite interesting. First, they have numerous servers across the globe, plus the expertise and the will to do a global-scale CDN-like thing, and they even managed to get it right. Google is nearly the only company on the planet that managed to get there. Interestingly, not all videos are equal: popular ones are cached more widely across the globe, so they can come from a nearby server, while less popular content is cached much less and won't download as fast.

            I've also looked into the details and found that YouTube pinches pennies even by being unwilling to buffer more than maybe ~2 minutes of video ahead. Getting more than that requires you to actually keep watching; the buffering happens in the background anyway. As long as your average bandwidth over those 2 minutes exceeds the video's bitrate, you don't see problems. At most I can imagine slightly faster prebuffering, but that only makes sense for really short videos.
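
            As a ballpark of what a ~2-minute read-ahead buffer amounts to (the bitrates are my own guesses for typical VP9 streams, not measured YouTube figures):

def buffer_size_mb(bitrate_mbps: float, seconds: float = 120.0) -> float:
    """Data held by a read-ahead buffer of `seconds` at a given video bitrate."""
    return bitrate_mbps * seconds / 8.0  # Mbit/s * s / 8 = megabytes

for label, mbps in (("1080p", 4.5), ("2160p", 18.0)):
    print(f"{label} at ~{mbps} Mbps: ~{buffer_size_mb(mbps):.0f} MB buffered over 2 minutes")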

            Google is one of the very few who managed to make YouTube behave very well even on "imperfect" connections, and they even got "adaptive" DASH working reasonably. Should the connection fail to keep up with the data rate for some reason, the player fetches the next chunks at a lower resolution. That somewhat degrades the picture, but the process is transparent. It shouldn't normally happen even with the best available YouTube content on 100 Mbps, though; Google would not dedicate 100 Mbps of bandwidth to one client, it costs too much. They do a hell of a lot to reduce bandwidth use, like improving VP9, making the best content available only in VP9, working really hard on an AV1 encoder, etc. On a side note, only some newer hardware supports VP9, and doing 100 Mbps of VP9 in software... hmm... interesting idea. That should be taxing for any CPU.
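
            A minimal sketch of how that kind of adaptive selection works in principle (the rendition ladder and bitrates are invented examples, not YouTube's actual values, and real players also weigh buffer level, not just measured throughput):

# Hypothetical rendition ladder: (label, average bitrate in Mbps), best first.
RENDITIONS = [
    ("2160p", 18.0),
    ("1440p", 10.0),
    ("1080p", 4.5),
    ("720p", 2.5),
    ("480p", 1.2),
    ("360p", 0.7),
]

def pick_rendition(measured_throughput_mbps: float, safety: float = 0.8) -> str:
    """Pick the highest rendition whose bitrate fits within a safety margin
    of the recently measured throughput; fall back to the lowest otherwise."""
    budget = measured_throughput_mbps * safety
    for label, bitrate in RENDITIONS:
        if bitrate <= budget:
            return label
    return RENDITIONS[-1][0]

print(pick_rendition(25.0))  # -> 2160p
print(pick_rendition(6.0))   # -> 1080p
print(pick_rendition(0.5))   # -> 360p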



            • #36
              Originally posted by SystemCrasher View Post
              On a side note, only some newer hardware supports VP9, and doing 100 Mbps of VP9 in software... hmm... interesting idea. That should be taxing for any CPU.
              Unless you use Windows (especially in a web browser) and fairly new hardware, VP9 decoding/encoding will run in software. Even a mid-range mobile Core 2 Duo is enough to handle 1080p or 720p60. I'm fairly used to watching VP9 at 2160p with my CPU doing all of the work. IIRC, 2160p60 has a target bitrate of 80 Mbps on YT.
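
              For a rough feel of why 4K60 is so much heavier than what a Core 2 Duo copes with, compare the pixel rates (decode cost isn't exactly proportional to pixel throughput, but it's a reasonable first-order proxy):

# Pixel throughput a software decoder has to sustain for various modes.
MODES = {
    "720p60":  (1280, 720, 60),
    "1080p30": (1920, 1080, 30),
    "2160p60": (3840, 2160, 60),
}

base = 1920 * 1080 * 30  # 1080p30 as the reference point
for name, (w, h, fps) in MODES.items():
    rate = w * h * fps
    print(f"{name}: {rate / 1e6:.0f} Mpixel/s (~{rate / base:.1f}x of 1080p30)")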



              • #37
                I've done quite a few VP9 encodes and inevitably stumbled on Google's own encoding recommendations (which they probably use themselves).

                If I remember correctly, they don't really go for a "target bitrate". They prefer a more logical approach, called CRF: you request a "target quality" level and also cap the maximum bitrate. What's the point? "Easy" sequences/segments take only as many bits as they really need to reach the requested quality level, undershooting well below the specified max bitrate, so it is more or less similar to Q-mode. "Hard" segments get capped at the max bitrate, keeping the worst-case bitrate at an acceptable level so the result can still be streamed reasonably (Q-mode could produce huge bitrates there, and that screws up streaming).

                This mode is almost perfect for fire-and-forget encodings like what YouTube needs for its videos: it is fairly optimal in bitrate vs. quality while still taking care of the worst case. It is also more or less adequate even in one pass; IIRC two-pass allocates bits better, but the difference isn't as big as in, e.g., "average bitrate" (= known file size) mode. So I guess that 80 Mbps is the "peak bitrate" of the CRF setup and is rarely hit in "usual" cases, only in extremes like "grab the camera and swing it wildly for a while" (which totally defeats motion compensation, though you wouldn't see anything meaningful in such footage anyway). The only "downside" I know of is that the file size is "unknown" (except for the absolute worst case, which is highly unlikely).
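
                For reference, this is roughly what such an encode looks like with ffmpeg's libvpx-vp9 encoder (a sketch, assuming ffmpeg with libvpx-vp9 and libopus is available; the CRF value and the bitrate cap are example numbers, not Google's actual settings):

import subprocess

def encode_vp9_constrained(src: str, dst: str, crf: int = 31, max_rate: str = "8M") -> None:
    """One-pass 'constrained quality' VP9 encode: -crf requests a quality
    level, while a non-zero -b:v acts as a ceiling for the hard segments."""
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "libvpx-vp9",
        "-crf", str(crf),   # target quality, lower = better
        "-b:v", max_rate,   # bitrate cap for worst-case segments
        "-c:a", "libopus",
        dst,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical usage:
# encode_vp9_constrained("input.mkv", "output.webm")
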
                Last edited by SystemCrasher; 03 October 2019, 09:24 AM.

