ASRock Z370M-ITX/ac: Mini-ITX Motherboard With Dual NICs, WiFi, Triple Display For ~$130 USD


  • #31
    Originally posted by torsionbar28 View Post
    Don't forget microphones, speakers, joysticks and gamepads, printers, etc. There are a ton of USB devices that have very low bandwidth requirements. Agreed, no need to waste high speed ports on these low speed devices.
    Silly trump supporters...

    Using a high-speed port for low-speed devices isn't a waste: the user is free to decide what to connect and always gets the best speed the device supports. USB hubs can also be used.

    Having slow ports, on the other hand, wastes the limited physical slots, and there is no way to work around that.

    Comment


    • #32
      Originally posted by starshipeleven View Post
      Ahem, you are talking about a device connected to a power adapter through a single port; I'm thinking about a motherboard that is supposed to be able to provide 100 watts to 2-3 different host ports at once, or at the very least must have a PSU capable of supplying 100 W to a single port at any given time.
      Which isn't the same thing. I know how OEMs size their PSUs (like 50-100 W less than what the system would need at full throttle, hoping it never needs that much).

      Also I'm very fucking scared of people who will be running 100 W through what will likely be random Chinese crap cables just because they can. Every day will be a field day for firefighters.
      Agreed, these are valid concerns. Cable ratings will have to be made clear, so users know which cable can handle how much power.


      Originally posted by starshipeleven View Post
      Nonsense reasoning. With a fast port I can connect anything I need to; with USB 2.0 ports I cannot. So it's the USB 2.0 ports that are wasting the limited port allotment on my laptop.

      Besides, USB 3.0 hubs are cheap, and with a single USB 3.0 host port connected to a USB 3.0 hub I can run a keyboard, mice, a webcam, any number of flash drives, and also a USB 3.0 hard drive at the same time with bandwidth to spare (and this on USB 3.0, mind you).

      So it's not like I cannot use all high-speed ports at the same time while also using a hub to connect low-speed stuff.
      Even on a laptop, I see little need for all the USB ports to be 3.0 or 3.1. For one, the CPU only has so much bandwidth to handle the speed, and on laptops I am sure that is lower than on desktops. A few would be good, but most devices outside storage devices just don't need it.

      As for hubs, that is actually a worse idea, unless the fundamental workings of USB have changed since its inception, which I doubt, since it is backwards compatible. The ports connected to a hub are cycled through to transfer data: only one device can transfer at a time, using the entire bus. So if you have several devices connected, each gets the full bus for a slice of time; the bandwidth doesn't get split up between devices. This means that if your mouse is connected alongside a flash drive with a copy in progress, your mouse movements will slow the transfer. Some device combinations are fine, like a mouse, keyboard, webcam, etc. that don't need constant full use of the bus to perform well. Put too many devices on one hub, though, and you may find yourself noticing a delay.

      FireWire, on the other hand, could split bandwidth between multiple devices. But it was more expensive and was only used for select products. Many actually thought USB was terrible and would never succeed against FireWire.
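      The time-slicing described above can be sketched as a toy model. This is a simplified illustration, not the real USB host-controller algorithm: the frame size and the reservation numbers are made up, but the shape matches how interrupt endpoints get a small reserved slice of each frame while bulk transfers soak up the remainder.

```python
# Toy model of how a USB host schedules a shared hub: interrupt endpoints
# (mice, keyboards) get their small reserved slice of every 1 ms frame
# first, and bulk endpoints (storage) share whatever is left.
# All numbers here are illustrative, not taken from the USB specification.

FRAME_BYTES = 60_000  # pretend usable payload of one high-speed frame

def schedule_frame(interrupt_eps, bulk_eps):
    """Return the bytes granted to each endpoint within one frame."""
    grants = {}
    remaining = FRAME_BYTES
    # Interrupt endpoints are serviced first with their reserved bytes.
    for name, reserved in interrupt_eps.items():
        granted = min(reserved, remaining)
        grants[name] = granted
        remaining -= granted
    # Bulk endpoints split the leftover bandwidth evenly.
    if bulk_eps:
        share = remaining // len(bulk_eps)
        for name in bulk_eps:
            grants[name] = share
    return grants

grants = schedule_frame({"mouse": 8, "keyboard": 8}, ["flash_drive"])
print(grants)  # the input devices barely dent the flash drive's share
```

      With these made-up numbers the mouse and keyboard together reserve 16 bytes out of 60,000 per frame, which is why input devices on the same hub barely slow a bulk copy in practice.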

      Comment


      • #33
        Built my server on a Ryzen 7, and when I build my next desktop (likely in a few months) it too will be Ryzen based, if only because of ECC support. Everyone needs ECC, with memory density these days being as high as it is, along with how much electromagnetic radiation is around... not harmful to most things, but ECC is an absolute must.

        Comment


        • #34
          Originally posted by tiwake View Post
          Built my server on a Ryzen 7, and when I build my next desktop (likely in a few months) it too will be Ryzen based, if only because of ECC support. Everyone needs ECC, with memory density these days being as high as it is, along with how much electromagnetic radiation is around... not harmful to most things, but ECC is an absolute must.
          What board did you get that has ECC support? AMD didn't remove ECC support from the CPU, but it is rarely wired up on consumer motherboards.

          From what I have seen, ECC is only necessary in environments dealing with large files that must be accurate, i.e. workstation environments involving CAD, video or photo editing, or other similar work. For day-to-day activities non-ECC memory is just fine. You can find a detailed report on Brian's Blog as to why he doesn't even bother with ECC memory for his own NAS server, though the one he builds and gives away each year does use it.

          Comment


          • #35
            Originally posted by audi.rs4 View Post
            Even on a laptop, I see little need that all the USB ports be 3.0 or 3.1. One, the CPU can only have so much bandwidth to handle the speed,
            What is this supposed to mean? USB controllers almost always sit on PCIe, or (for the ones integrated in the chipset) on the CPU-chipset interconnect, which is plenty fast.

            A single lane of PCIe 2.0 (still common in laptops) is 500 MB/s full duplex (so 500 MB/s up AND down simultaneously), for example. And unlike USB it has little overhead, so that's near its actual top speed.
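            The 500 MB/s figure falls straight out of the line rate and the encoding, which can be checked with a few lines of arithmetic:

```python
# Per-lane PCIe throughput, per direction, before protocol overhead.
# PCIe 1.x/2.0 use 8b/10b encoding (10 line bits per data byte);
# PCIe 3.0 switched to 128b/130b (8.125 line bits per data byte).

def pcie_lane_mbps(gigatransfers_per_sec, line_bits_per_byte):
    """Decimal MB/s per lane, per direction."""
    return gigatransfers_per_sec * 1e9 / line_bits_per_byte / 1e6

print(pcie_lane_mbps(2.5, 10))     # PCIe 1.x: 250.0
print(pcie_lane_mbps(5.0, 10))     # PCIe 2.0: 500.0
print(pcie_lane_mbps(8.0, 8.125))  # PCIe 3.0: ~984.6
```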

            A few would be good, but most devices, outside storage devices just don't need it.
            Well, a low- or mid-range laptop usually has about 3 USB ports; I don't think I'm asking too much if I want them all to be the highest speed possible. On a desktop I can have all the ports I want, so who cares.

            Also, USB-C (a true, fully-fledged implementation) is much more than plain bulk data: it can carry video and PCIe as well.

            This means, if you have your mouse connected along with a flash drive with a copy going, your mouse movements will slow the transfer speed.
            This is technically right but irrelevant in practice. Mice, keyboards, and other input devices need what, 2-3 Kb/s? (Many are USB 1.1 devices even nowadays.)
            The only thing that matters for input devices is low lag, and I have never seen issues even with USB 2.0 hubs.

            If you look here, https://msdn.microsoft.com/en-us/library/ms894725.aspx - USB can label its traffic. Most storage devices use "bulk transfer" or the newer UASP protocol, so the hub knows that traffic isn't lag-sensitive, while input devices declare "interrupt" transfers and get prioritized.

            And USB 3.0 and later have so much raw bandwidth that the inefficiency of the USB protocol becomes irrelevant through sheer brute force. They did implement smarter traffic control too (packets are routed only to the device they are addressed to, not broadcast to all devices on the hub).

            I can basically max out the read speed of two external 3.5" hard drives at once over a single host port (and hub), which is around 150 MB/s total transfer rate. Real-life testing: I've cloned drives sitting on the same USB 3.0 hub and it progressed at the same speed it would have with the drives on SATA ports (in my desktop PC).
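            The arithmetic behind that anecdote checks out. A quick sanity check (the ~400 MB/s "practical" figure is a rough assumption on my part, not a spec value):

```python
# Sanity check: can two HDDs totalling ~150 MB/s share one USB 3.0 link
# without throttling? USB 3.0 (5 Gb/s) uses 8b/10b encoding, so the raw
# payload ceiling per direction is 500 MB/s; real-world UASP throughput
# is commonly quoted around 400 MB/s (a rough figure, not from the spec).

usb3_raw_mb_s = 5e9 / 10 / 1e6   # 8b/10b: 10 line bits per data byte
practical_mb_s = 400             # rough real-world UASP throughput
two_hdds_mb_s = 150              # combined read speed from the post

print(usb3_raw_mb_s)                    # 500.0
print(two_hdds_mb_s <= practical_mb_s)  # True: the link isn't the bottleneck
```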

            Firewire on the other hand, could split speeds between multiple devices.
            In practice that part behaved the same as USB, since USB also "splits speed" between devices.

            Sure, FireWire was a P2P protocol, so any device could initiate transfers to/from any other without a master device (the PC) being involved, which made it daisy-chainable and so on and so forth.

            Pretty cool, but devices using it required much more advanced (and expensive) controllers than USB (and I mean controllers inside the device itself), which barred its widespread use.

            Comment


            • #36
              Originally posted by audi.rs4 View Post
              What board did you get that had ECC support? AMD didn't remove ECC support in the CPU, but it is rarely built into consumer motherboards.
              see here http://www.overclock.net/t/1629642/r...c-motherboards

              There are articles where they tested ECC by overclocking the RAM (to make it unstable) and watching the ECC error logs in Linux; they were using an ASRock X370 Taichi http://www.hardwarecanucks.com/forum...ep-dive-4.html

              (They expected Linux to shut down on uncorrectable errors, but Linux only does that when the uncorrectable error hits kernel RAM; otherwise it just terminates the affected application.)
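              On Linux, the corrected/uncorrected counts that article watched come from the kernel's EDAC subsystem. A minimal sketch, run here against sample log lines (illustrative, since the real /sys/devices/system/edac tree only exists on machines with working ECC):

```shell
# Count corrected (CE) vs uncorrected (UE) ECC events in a kernel log.
# The log lines below are illustrative samples of typical EDAC output.
cat > /tmp/edac-sample.log <<'EOF'
EDAC MC0: 1 CE on mc#0csrow#1channel#0 (csrow:1 channel:0 page:0x3ac41)
EDAC MC0: 1 CE on mc#0csrow#1channel#0 (csrow:1 channel:0 page:0x3ac41)
EDAC MC0: 1 UE on mc#0csrow#2channel#1 (csrow:2 channel:1 page:0x1b22f)
EOF

echo "corrected:   $(grep -c ' CE ' /tmp/edac-sample.log)"
echo "uncorrected: $(grep -c ' UE ' /tmp/edac-sample.log)"

# On a live ECC system you would check the real sources instead:
#   dmesg | grep -i EDAC
#   cat /sys/devices/system/edac/mc/mc0/ce_count
```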

              You can find a detailed report on Brian's Blog as to why he doesn't even bother with ECC memory for his own NAS server
              Not having ECC on a storage server (i.e. a device whose whole job is keeping files whose integrity matters) is kinda foolish imho.
              Once you're past 4 drives in your system the cost difference isn't that relevant anymore.

              Comment


              • #37
                Originally posted by starshipeleven View Post
                ECC support has been ONLY on C-chipsets since at least Sandy Bridge, afaik. You can usually mount Xeons on other chipsets too, but ECC won't work (what's the point of a Xeon on such boards, again?).

                Which means that you can find them used relatively easily, also older Xeons and other consumer-grade processors where ECC support was turned on.
                I seemed to remember that Q-series chipsets supported ECC, but you must be right about that one; it just seems very Intel-ish to restrict ECC to workstation-grade chipsets.
                Nevertheless, Xeons were supported by consumer-targeted boards up until Skylake. Also, Xeons had 20 PCIe lanes before Skylake, compared to the 16 lanes found in the Core i series, which offered nice expansion options. Xeons also had bigger caches than the i5s, and the top-of-the-line models had considerably higher out-of-the-box frequencies than Core i7s.

                All this gave some flexibility when you were upgrading your setup piecemeal, or when you were buying used parts. Neither of which is a concern to manufacturers, of course.

                Comment


                • #38
                  Originally posted by starshipeleven View Post
                  Uhm, could you point me to an official statement of what you said (that only 3.1 can deliver 100w)?
                  That isn't official enough for you? Regardless, just about any reputable source will tell you 3.1 can deliver 100W:
                  https://www.asus.com/Motherboard-Acc...-31-UPD-PANEL/
                  https://www.anandtech.com/show/9004/...losures-tested
                  Just google "USB 3.1 100W" if you're not satisfied.
                  By looking at the links and the docs, it seems more like power delivery is a random opt-in feature that is unrelated to the USB version (and it is also theoretically available for USB 2.0 ports? WTF?! Who the fuck thought that was a good idea?!)
                  The "opt-in" is precisely why I am not fond of this. The point of USB is to be universal, and for a piece of tech to be universal, it must be consistent. This power delivery system is far from that; note that it can go up to 20 V. As I've said in other USB 3.1 topics, if a device requires more than 10 W but the PC's USB ports can't supply that much power, either the consumer will be confused or they will have to use a separate power cord, which defeats the purpose of delivering power through the same cable. It also means manufacturing costs go up (due to the additional components involved in handling USB's variable power delivery), and it increases the chances of something going wrong.
                  The real danger is the USB cords themselves: despite having the proper connectors, they will not be universal. The cheap Type-C cord you use to charge your phone is not going to be able to handle 100 W. And there are also the risks involved in blowing out PSUs. The list of problems goes on and on.
                  Yeah, the only thing that can save the day is Intel/AMD dumping any USB 2.0 controllers from their chipsets.
                  USB 2.0 I'm fine with; that's as uncomplicated as USB gets. The mistake was giving USB 3.1 three different speeds, three different port types, variable power delivery, and all the caveats that come with those things.
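                  The "variable power delivery" being argued about is negotiated in fixed steps. A sketch of the standard USB Power Delivery fixed-voltage levels; the table is written from memory of the PD power rules, so treat the exact steps as an assumption:

```python
# Standard USB Power Delivery fixed-voltage power rules (PD 2.0/3.0).
# Above 3 A (i.e. 20 V at 5 A for the full 100 W) the spec requires an
# electronically marked ("e-marked") cable rated for 5 A, which is why
# a cheap phone-charging cable must never be asked for 100 W.

PD_RULES = [  # (volts, max_amps)
    (5, 3.0), (9, 3.0), (15, 3.0), (20, 5.0),
]

def negotiate(watts):
    """Pick the lowest PD voltage level able to supply `watts`.

    Returns (volts, amps, needs_emarked_cable)."""
    for volts, max_amps in PD_RULES:
        if volts * max_amps >= watts:
            amps = round(watts / volts, 2)
            return volts, amps, amps > 3.0
    raise ValueError("beyond USB PD (max 100 W)")

print(negotiate(15))   # (5, 3.0, False)  - phone-style charging
print(negotiate(100))  # (20, 5.0, True)  - needs a 5 A e-marked cable
```

                  The last return value is the crux of schmidtbag's cable complaint: the connector looks identical whether or not the cable can carry 5 A.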
                  Last edited by schmidtbag; 13 October 2017, 09:41 AM.

                  Comment


                  • #39
                    Originally posted by schmidtbag View Post
                    That isn't official enough for you?
                    Never mind, I got confused because the hardware for 3.0 and 3.1 Gen 1 is the same. Power delivery can't be "added to 3.0" because 3.0 no longer exists as a separate thing: it is now called 3.1 Gen 1, even though it is technically the same hardware.
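                    The renaming that causes this confusion, written out (the mapping is from the USB-IF naming as I remember it):

```python
# The 5 Gb/s link introduced as "USB 3.0" was rebranded "USB 3.1 Gen 1"
# without any hardware change; only "USB 3.1 Gen 2" is a new, faster
# link. Speeds in Gb/s.

GEN_SPEED = {
    "USB 3.0":       5,
    "USB 3.1 Gen 1": 5,   # same link as 3.0, new name
    "USB 3.1 Gen 2": 10,  # the actually new 10 Gb/s link
}

assert GEN_SPEED["USB 3.0"] == GEN_SPEED["USB 3.1 Gen 1"]
print("USB 3.0 and USB 3.1 Gen 1 are the same 5 Gb/s link")
```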

                    Comment


                    • #40
                      Originally posted by starshipeleven View Post
                      They are workstation boards, also ASUS and others have at least one workstation board with the workstation-class chipsets (those that support the integrated graphics) and ECC support, but they usually make ATX or mATX, not mITX.
                      I do not consider boards like that worthy of the term "workstation". They do not even have basic out-of-band management like Intel AMT, not to mention full-blown vPro (basic AMT + VNC). Even lowly Dell OptiPlex machines have them, so why not offer the option? With Microsoft being iffy about keeping Wake-on-LAN enabled since Windows 8, a feature like that has become essential.

                      Originally posted by starshipeleven View Post
                      Intel enables ECC support on some select processors (randomly picked by their marketing people?). I've seen i3s with ECC support too, and in the seventh gen they have an i7 with ECC support as well, though no i5.

                      I usually get Xeons just because I'm not in the mood to play hide-and-seek to find the right Pentium/Celeron/whatever when I'm assembling a system that needs ECC.
                      Heh, yeah, that's what they do. They tried the same with VT-x a long time ago but caved to the pressure and made it a standard feature.

                      Comment
