I've been doing quite a few VP9 encodes and inevitably stumbled on Google's own encoding recommendations (which they probably use themselves).
If I remember correctly, they don't really go for a "target bitrate". They prefer something more logical, called CRF (constant rate factor). That is: you request a "target quality" level and also cap the max bitrate. What's the point? "Easy" sequences/segments take only as many bits as they really need to achieve the requested quality level, undershooting well below the specified max bitrate, so it's more or less similar to Q-mode. "Hard" segments get capped at the max bitrate, keeping the worst-case bitrate at an acceptable level so the result can still be streamed reasonably (Q-mode could produce huge bitrates there, which screws up streaming).
This mode is almost perfect for fire-and-forget encodes like what YouTube needs for videos: it is close to optimal in bitrate vs. quality while taking care of the worst case. It's also more or less adequate even in 1-pass. IIRC two-pass would allocate bits better, but the difference isn't as big as in, e.g., "average bitrate" (= known file size) mode. So I guess 80 Mbps is a "peak bitrate" of CRF, rarely hit in "usual" cases - only in some extremes, like grabbing the camera and swinging it wildly for a while (which totally defeats motion compensation, though you wouldn't see anything meaningful either). The only "downside" I know of is that the file size is unknown in advance (except for the absolute worst case, which is highly unlikely).
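In ffmpeg terms, the constrained-quality mode described above maps to combining a CRF target with a bitrate cap on the libvpx-vp9 encoder. A minimal sketch - the helper name, CRF value, and cap are my own illustrative choices, not Google's actual settings:

```python
# Hypothetical helper: build an ffmpeg argument list for a constrained-quality
# (CRF + bitrate cap) VP9 encode. With libvpx-vp9, passing both -crf and a
# nonzero -b:v selects constrained quality: -b:v acts as the ceiling.
def vp9_constrained_quality_args(src, dst, crf=31, max_bitrate="12M"):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libvpx-vp9",
        "-crf", str(crf),      # target quality level (lower = better quality)
        "-b:v", max_bitrate,   # bitrate cap for "hard" segments
        dst,
    ]

args = vp9_constrained_quality_args("in.mp4", "out.webm")
```

The list could be handed to `subprocess.run(args)`; building it as a list rather than a shell string avoids quoting issues with file names.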
The Free Software Foundation Endorses First Router In 3 Years - But It's 10/100 + 802.11n WiFi
Guest replied
Originally posted by SystemCrasher View Post: On a side note, only some new HW supports VP9, and doing 100 Mbps VP9 in software... hmm, interesting idea. Should be taxing for any CPU.
Originally posted by DoMiNeLa10 View Post: It depends on the site. YouTube tends to cap out at around 280 Mbps for me, and Steam seems to be able to handle any amount of bandwidth I throw at it, so I assume they'd serve 10 Gbps to people who have such connections. There are mainstream uses where high bandwidth is useful, and there's always BitTorrent. Microsoft takes advantage of people's connections as well to send updates among devices. I don't think there's enough bandwidth, as it's quite easy to have a use where you'll want more.
1) Faster and more reliable - wired takes less processing on the router side and isn't sensitive to noise or other systems' activity. The router's CPU and RAM are more of a concern in this regard, I guess. If someone wants gigabit plus at least some traffic processing, they likely want a full-fledged PC chewing on that; even the most powerful SBCs would struggle, and typical router HW wouldn't really handle it - except maybe in a HW accelerator, which is fairly dumb, so you wouldn't get any advantage from the fact it's Linux (Linux can do a lot of fun things).
2) Air is a shared medium. If you live in a desert it isn't a problem to hog air time indefinitely; in populated areas, however, it can upset others. High modulation indexes are picky about noise, and the wide bandwidth of the fastest modes makes them quite likely to become "noise" for someone else; if this trend goes on, at some point weird things happen. On a side note, OpenWRT has proven to be quite dumb about "frequency agility": at most, a nightly build can sniff the air at boot-up to select channels, but it gives no crap about moving away if an interference source shows up later. Ironically, many "stock" firmwares are much more agile - so they will readily choose the channels where OpenWRT elected to settle. In the best case they move off, but that depends on the remote system's will, so it may or may not happen, lol. On a side note, I've sent a few funny packets to the most exceptional hogs in "dense" areas; somehow it could be verrrry persuasive.
As for YouTube: YouTube and other Google things are quite interesting. First, they have numerous servers across the globe, the expertise, and the will to do a global-scale CDN-like thing, and they even managed to get it right. Google is nearly the only company on the planet that managed to get there. Interestingly, not all videos are equal: popular ones are cached more widely across the globe, so they can come from a close server; less popular things are cached much less, so they don't download as fast either.
I've also observed the details and found YouTube saves pennies even by being unwilling to buffer more than maybe ~2 minutes of video. Getting more than that requires you to actually be watching it, which happens in the background anyway. As long as the 2-minute average exceeds the video's bandwidth, you don't see problems. At most I can imagine slightly faster prebuffering or so, but that only makes sense for really short videos.
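The "keep a small buffer, stall only if the average falls behind" behavior described above can be modeled with a toy drain/refill loop (function name and numbers are made up for illustration):

```python
# Toy playback-buffer model: each second, the buffer gains the seconds of
# video fetched at the current throughput and loses the one second played.
# The player stalls only when buffered seconds hit zero, so short dips are
# harmless as long as the average throughput keeps up with the video bitrate.
def stalls(buffer_s, video_kbps, throughput_kbps_samples):
    for tp in throughput_kbps_samples:       # one sample per second of playback
        buffer_s += tp / video_kbps - 1.0    # seconds fetched minus second played
        if buffer_s <= 0:
            return True                      # buffer ran dry: rebuffering
    return False

# With a ~120 s prebuffer, a 4 Mbps stream survives a full minute of outage.
ok = stalls(120, 4000, [0] * 60)
```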
Google is one of very few who managed to get YouTube behaving very well even on "imperfect" connections, and they even got "adaptive" DASH working reasonably. Should the connection fail to keep up with the data rate for some reason, it fetches the next chunks at a lower resolution. That degrades the picture somewhat, but the process is transparent. It shouldn't normally happen even on the best available YouTube content on 100 Mbps, though. Google would not dedicate 100 Mbps of bandwidth to one client; it costs too much. They do a hell of a lot to reduce bandwidth use, like improving VP9, making the best content available only in VP9, working really hard on an AV1 encoder, etc. On a side note, only some new HW supports VP9, and doing 100 Mbps VP9 in software... hmm, interesting idea. Should be taxing for any CPU.
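The adaptive-DASH behavior described above boils down to picking the highest-bitrate rendition that the measured throughput can sustain. A toy sketch - the rendition ladder and safety margin are invented, not YouTube's actual logic:

```python
# Given recently measured throughput, pick the highest-bitrate rendition that
# still fits (with some headroom for variance), falling back to the lowest
# rendition when even that doesn't fit - degrade the picture, never stall.
def pick_rendition(throughput_kbps, ladder_kbps, safety=0.8):
    usable = throughput_kbps * safety          # leave headroom for variance
    fitting = [b for b in ladder_kbps if b <= usable]
    return max(fitting) if fitting else min(ladder_kbps)

# Example ladder in kbps, roughly 480p..4K tiers (illustrative numbers).
choice = pick_rendition(5000, [700, 1500, 3000, 6000])
```

Real players also smooth the throughput estimate over several chunks to avoid flapping between renditions; this sketch omits that.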
Guest replied
Originally posted by SystemCrasher View Post: most servers just would not serve you at gigabit speeds
I don't think there's enough bandwidth, as it's quite easy to have a use where you'll want more.
Originally posted by DoMiNeLa10 View Post: I don't remember whether the median Internet connection speed surpassed 100 Mbps like a year ago, at least in Europe.
Last edited by SystemCrasher; 01 October 2019, 04:06 PM.
It's rather simple: all .ac radios I know of require firmware. And of course it's a murky non-free blob doing hell knows what. GNU's mission was never about pleasing dumbass consumers whose only wish is to consume more, more and more - ignoring all the consequences, be it global warming, planet pollution, giving up freedoms or whatever. Oh yes, there is no such thing as a free lunch. And even if you paid for your lunch, these days that means very little and often wouldn't really prevent you from getting treated like a mouse caught in a mousetrap.
Additionally, I have to admit .ac is short-ranged. Even .n would barely work in a large house, falling back to the lowest modulations (or even .g rates) on 2.4 GHz. And 5 GHz has even worse penetration than .n. The higher the frequency, the more challenging it gets on its own: less power transmitted, more absorption by obstacles, and so on. And then the energy is spread rather thinly over a wide spectrum to keep the speed high. That makes things very fragile and short-ranged. So it works fine if you sit right next to your router, but two rooms away it can look like an entirely different story, especially if one looks at the wireless stats, measures actual transfer speeds and so on.
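The "higher frequency, shorter range" point can be quantified with the textbook free-space path loss formula; this is only a first-order sketch, since it ignores wall absorption, which makes 5 GHz even worse in practice:

```python
import math

# Free-space path loss for distance in meters and frequency in Hz:
#   FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
# The last constant works out to about -147.55 dB.
def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# At the same 10 m distance, moving from 2.4 GHz to 5 GHz costs
# 20*log10(5/2.4) ~ 6.4 dB of extra path loss before any walls.
extra_loss = fspl_db(10, 5.0e9) - fspl_db(10, 2.4e9)
```

Since every 6 dB of extra loss halves the usable range at a given link budget, this alone is a big chunk of why .ac coverage shrinks.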
Even funnier, virtually no dumbass customers are capable of building a sane "core network" with roamable APs to compensate for all that. At most there are expensive "professional" solutions to do that, but those are clearly overkill for a typical house. So this "faster, faster, faster" race leads to worse and worse coverage. Some vendors already dare to violate regulations by transmitting way more power than allowed, to ensure it at least somehow works for their customers, but that's foul play and cheating, to begin with.
Ah, and how does it work in practice? You get stock firmware doing like 24-27 dBm (to get at least a halfway sane range), flash OpenWRT to get rid of the nasty spying/vendor-locked blob, output dwindles down to a shy 20 dBm, coverage goes nowhere... you suddenly figure out half the channels are disabled for no apparent reason, and you either mumble "open source sucks" and stick with the factory blobs and all their downsides, or resort to rampant hacking to re-enable the HW capabilities the stock firmware had out of the box. As if this folly weren't already enough, the FCC and such get jealous and demand signatures or something - to ensure you either enjoy the treacherous blob or get shitty HW performance if you try to escape from jail.
Whatever. Do you people really think it should work like that? That's where the FSF eventually gets the point... and if modern HW more and more turns into treacherous, backdoored, malicious or misbehaving shit, that's hardly the FSF's fault. Laughing at the FSF on these grounds is utterly dumb, to say the least.
Guest replied
Originally posted by pal666 View Post: all non-entry level routers are higher than n. you probably will get more sustained throughput via gigabit ethernet, but surely in good conditions you can see higher wireless speeds at least sometimes. same goes for n vs 100mbit
Originally posted by pal666 View Post: many things make no sense from some pov but are still being sold. most people do not have available wan bandwidth higher than 100mbit anyway, that's why i said it is acceptable for them
Originally posted by DoMiNeLa10 View Post: In case of 802.11n, the best real world performance you can expect is ~100mbps, and that's with a proper 4x4 setup. I don't recall seeing a router with a wireless connection that's faster than wired ones, as it makes no sense whatsoever.
Originally posted by DoMiNeLa10 View Post: Normies will want to be able to use all bandwidth they have, and having the wired part be the bottleneck makes no sense.
Last edited by pal666; 01 October 2019, 10:38 AM.
Originally posted by alcalde View Post: And yet we all use it and it works just fine.
Guest replied
Originally posted by pal666 View Post: subj
actually it is pretty common for routers to have more capable wireless connections than wired ones. subj is not an exception