• r00ty@kbin.life · 4 hours ago

    I’m pretty sure NICs for those speeds would really need more hardware offloading and DMA to stand a chance of reaching them. With those it should be possible; with the right hardware handling there shouldn’t be a problem. SSDs connected over PCIe manage a lot more.
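    Rough back-of-the-envelope numbers support the PCIe comparison (the SSD figure is an assumed typical PCIe 4.0 x4 NVMe sequential rate, not a measurement):

```python
# Back-of-the-envelope comparison: a 25GbE NIC vs. a PCIe 4.0 x4 NVMe SSD.
# All figures are approximate, typical values (assumptions, not measurements).

nic_gbps = 25                  # 25GbE line rate, gigabits per second
nic_gbytes = nic_gbps / 8      # ~3.1 GB/s

ssd_gbytes = 7.0               # assumed PCIe 4.0 x4 NVMe sequential read, GB/s
ssd_gbps = ssd_gbytes * 8      # ~56 Gb/s

print(f"25GbE ≈ {nic_gbytes:.2f} GB/s")
print(f"NVMe SSD ≈ {ssd_gbps:.0f} Gb/s, ~{ssd_gbps / nic_gbps:.1f}x a 25GbE link")
```

    So a single consumer SSD already moves data over PCIe at more than twice what a 25GbE link can carry.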

    In real terms, who actually needs it right now, aside from posting speed test results?

    I have symmetric gigabit and can upgrade to 2.5. But I cannot imagine we’d need 2.5, let alone 10 or 25, and I’m a fairly heavy user.
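    To put “heavy user” in perspective, here is the arithmetic, assuming the commonly cited ~25 Mb/s bitrate for a 4K video stream:

```python
# Rough illustration of gigabit headroom for a heavy home user.
# 25 Mb/s per 4K stream is an assumed, commonly cited bitrate.

link_mbps = 1000      # symmetric gigabit link
stream_mbps = 25      # assumed 4K stream bitrate

concurrent_streams = link_mbps // stream_mbps
print(f"A gigabit link fits ~{concurrent_streams} simultaneous 4K streams")
```

    Around forty simultaneous 4K streams before a gigabit link is saturated, which is well beyond any single household’s use.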

    • StarDreamer@lemmy.blahaj.zone · 2 hours ago (edited)
      1. All NICs already use DMA to copy packets into and out of memory. Yes, even your $10 ones. So “would need DMA to stand a chance” doesn’t have any technical meaning beyond putting a bunch of words together.
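
      To make the point concrete: even the cheapest NIC designs work roughly like this. The driver posts buffer addresses in a descriptor ring, and the hardware DMAs packet bytes straight into those buffers with no per-byte CPU copying. A toy simulation of that handshake (not real driver code; all names are illustrative):

```python
# Toy simulation of a NIC receive descriptor ring (illustrative only).
# A real driver posts physical buffer addresses; the NIC's DMA engine
# writes packet bytes into them and flips a "done" flag per descriptor.

RING_SIZE = 8

class Descriptor:
    def __init__(self, buf):
        self.buf = buf      # buffer the NIC will DMA into
        self.done = False   # set by "hardware" once a packet has landed

ring = [Descriptor(bytearray(2048)) for _ in range(RING_SIZE)]

def nic_dma_write(slot, packet):
    """Pretend hardware path: copy packet into the posted buffer."""
    d = ring[slot % RING_SIZE]
    d.buf[:len(packet)] = packet
    d.done = True

def driver_poll(slot):
    """Driver side: reap a completed descriptor, then repost it."""
    d = ring[slot % RING_SIZE]
    if not d.done:
        return None
    data = bytes(d.buf[:64])   # hand the packet up the stack (truncated here)
    d.done = False             # repost the descriptor for reuse
    return data

nic_dma_write(0, b"hello")
pkt = driver_poll(0)
print(pkt[:5])
```

      The CPU only touches descriptors and completed buffers; the bulk data movement is the DMA engine’s job, on a $10 card just as on a $500 one.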

      2. The bottleneck for TCP is sequence number processing, which must be done on a single core (for each flow) and cannot be parallelized. You also cannot offload sequence number processing without making major sacrifices that produce corrupted data in several edge cases (see TCP chimney offload, which cannot handle the TCP extensions needed to run TCP at 1Gbps). So no, “more offloading” is easy to say but not feasible.
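
      This single-core-per-flow constraint is exactly why NICs do Receive Side Scaling (RSS): the hardware hashes each packet’s flow tuple so every packet of one TCP flow lands on the same core, keeping sequence processing in order there, while different flows can spread across cores. A single flow still never parallelizes. A simplified sketch (real NICs use a keyed Toeplitz hash; `hashlib` here is just a stand-in):

```python
# Simplified RSS-style flow steering: hash the 4-tuple, pick a queue/core.
# Real NICs use a Toeplitz hash with a secret key; sha256 is a stand-in.

import hashlib

NUM_CORES = 4

def rss_queue(src_ip, src_port, dst_ip, dst_port):
    tup = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(tup).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CORES

# Every packet of the same flow maps to the same core, so one flow's
# sequence-number processing can never use more than one core.
q1 = rss_queue("10.0.0.1", 40000, "10.0.0.2", 443)
q2 = rss_queue("10.0.0.1", 40000, "10.0.0.2", 443)
assert q1 == q2
```

      Scaling across flows is easy; scaling one flow past what a single core can process is the hard part.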

      3. Who needs it: data centers trying to scale legacy software, or those dealing with multi-region data replication (RoCEv2 is terrible for long-distance links). But no, no home user would need it.