Low latency means low compression. Low compression means high bandwidth.
1080p60 NDI will be around 200 Mbps. If you are doing 2160p60, that’s about 800 Mbps (which is about the limit I would run 1GbE at). Doesn’t leave much headroom for anything else, and a burst of other traffic can overflow switch buffers and cause packet drops.
2.5Gbps would be enough.
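To make the headroom argument concrete, here's a quick sketch (using the rough NDI bitrates quoted above, which are estimates rather than official spec figures) of how much of each link is left over for other traffic:

```python
# Rough headroom check: how much link capacity remains after an
# uncompressed-ish video stream? Bitrates are the estimates from the
# comment above (~800 Mbps for 2160p60 NDI), not official NDI numbers.

def headroom_pct(stream_mbps: float, link_mbps: float) -> float:
    """Percent of link capacity left after the video stream."""
    return 100.0 * (link_mbps - stream_mbps) / link_mbps

# 2160p60 at ~800 Mbps on 1GbE, 2.5GbE, and 10GbE links
for link in (1000, 2500, 10000):
    print(f"{link} Mbps link: {headroom_pct(800, link):.0f}% headroom")
# 1GbE leaves only ~20% headroom, 2.5GbE ~68%, 10GbE ~92%
```

At 20% headroom on 1GbE, any significant burst of other traffic competes directly with the stream, which is where the dropped-packet risk comes from.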
But I see 2.5Gbps and 5Gbps as “stop-gaps”. Data centers standardised on 10/40Gbps for a while (before 25/100 and 100/400) - it’s still really common tbh - so 10Gbps gear is cheap.
I don’t see the point in investing in 2.5/5Gbps.
Not all data transfer is sending stuff to storage; streaming your display live at a high bitrate, for example, never touches storage.
Is more than 1Gbps needed for that? That seems insane, but I’m old and watch stuff in full HD so what do I know.