GNU Wget2 Reaches Beta With Faster Download Speeds, New Features
-
Originally posted by schmidtbag:
I wasn't aware wget was so heavy on CPU usage. What exactly makes it so high to warrant multithreading? Even on 100Mbps down, I always got the impression it was bottlenecked by the network, though I can't say I ever really paid close attention to CPU usage when I use it. I suppose if you're downloading something on a LAN, the CPU usage will spike. But at that point, why not just use some other local file transfer service?
To clarify - I'm not complaining they made it multithreaded, I'm just a bit surprised it demands so much.
In many cases Wget2 downloads much faster than Wget1.x thanks to HTTP/2, HTTP compression, parallel connections, and use of the If-Modified-Since HTTP header.
-
Originally posted by Ardje:
The current wget's IPv6 support is a hack; there are so many ways it does not work. I switched to curl, sometimes combined with socat (a four-year-old curl needs socat, because curl also had a tiny bug four years ago, and that's a setup I still need to use).
Too bad, because in every other way wget was really user-friendly. But IPv4 assumptions ran deep in the code and the command-line parsing. Even the busybox version of wget was ahead of mainline wget with respect to IPv6.
Anyway, that was the case with many utilities. That's how I found socat: after hacking IPv6 support into utility n, I found socat and stopped reimplementing things socat had already done correctly more than four years earlier.
-
I think the most exciting feature is the new --tls-session-file and --tls-resume command-line options. These have the potential to greatly improve performance for multiple invocations that hit the same web server.
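Based on those option names, usage would presumably look something like this (the session file path and URLs are just examples; check the Wget2 manual for exact semantics):

```shell
# First run performs a full TLS handshake and saves the session data.
wget2 --tls-session-file="$HOME/.wget2-session" https://example.com/page1.html

# A later run resumes that session, skipping a full handshake round trip.
wget2 --tls-session-file="$HOME/.wget2-session" --tls-resume https://example.com/page2.html
```

The win is one fewer round trip plus no repeated key exchange, which matters most for many short runs against the same host.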
Things I would have liked to see that didn't make the cut:
- Better persistence between runs, such as starting a sub-process as a per-user service. Subsequent runs could hand their network traffic to the existing service over a socket, reducing the number of SYN/SYN-ACK handshakes caused by repeated invocations. If the service went unused for an extended length of time, it could quit automatically.
- Direct support for DNS over HTTPS (DoH). It is possible to use OS libraries that resolve over DoH, but it would be nice to be able to tell Wget2 to perform DNS resolution itself as an alternative to going through the OS resolver.
- SOCKS4 and SOCKS5 proxy support. It would sometimes be nice to use a SOCKS proxy (such as the one created by the ssh client's -D option) to access sites that can't be reached directly and where no HTTP proxy is available. curl can do this, but it would be nice to be able to do it with wget2 as well.
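The SOCKS workflow described in the last item already works with curl, roughly like this (host names and the local port are placeholders):

```shell
# Open a SOCKS5 proxy through an SSH dynamic port forward on local port 1080.
ssh -D 1080 -N user@gateway.example.com &

# Fetch through it; --socks5-hostname also resolves DNS on the far side.
curl --socks5-hostname localhost:1080 -O https://internal.example.com/file.tar.gz
```

Equivalent flags in wget2 would let recursive mirroring work through the same tunnel.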
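To make the DoH wish above concrete, here is a minimal sketch (my own illustration; the resolver URL and hostname are only examples) of what a built-in resolver would do internally: build a wire-format DNS query and carry it in an RFC 8484 GET request.

```python
import base64
import struct

def build_doh_url(resolver_url: str, hostname: str, qtype: int = 1) -> str:
    """Build an RFC 8484 DoH GET URL for hostname (qtype 1 = A record)."""
    # DNS header: ID=0 (RFC 8484 suggests 0 to help HTTP caching),
    # flags=0x0100 (recursion desired), 1 question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels ending in a zero byte,
    # then QTYPE and QCLASS=1 (IN).
    qname = b"".join(bytes([len(p)]) + p.encode("ascii") for p in hostname.split("."))
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)
    # The "dns" parameter is base64url without padding, per the RFC.
    encoded = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode("ascii")
    return f"{resolver_url}?dns={encoded}"

print(build_doh_url("https://cloudflare-dns.com/dns-query", "example.com"))
```

The response would come back as a wire-format DNS message in the HTTP body (content type application/dns-message), so the whole lookup rides over the same HTTPS stack the downloader already has.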