Using IP (NDI) and SDI at the same time

This should pose no problem in normal deployments... in fact, it is a method a lot of our clients use to have a backup option in the event a feed drops. Where you might see an issue is with the ZCams that do not feature isolation between the PoE and SDI ports... using a PoE splitter will resolve this potential power issue. The other thing to consider is that because the cameras currently utilize a single processor, the higher the SDI / HDMI output, the higher the latency over IP. To illustrate the impact of adjusting the output resolution, using a recent customer's site as an example: 1080@60 on both IP and SDI / HDMI resulted in around 215 ms of latency over IP, while setting SDI / HDMI to 720@30 brought IP latency at 1080@60 down to about 143 ms.
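
If you want to verify numbers like these yourself, one common approach is the on-screen clock test: point the camera at a monitor running a millisecond clock, bring the received feed up next to the live clock, photograph both, and subtract the two readings to get end-to-end latency. Below is a minimal Python sketch of such a clock; the window title, font, and refresh interval are arbitrary choices, not anything specific to our cameras.

```python
# A minimal millisecond clock for "point the camera at the screen"
# latency tests: film this window, view the received feed beside it,
# photograph both, and subtract the two readings.
import time
import tkinter as tk

root = tk.Tk()
root.title("Latency clock")
label = tk.Label(root, font=("Courier", 64))
label.pack(padx=20, pady=20)

def tick():
    # time.monotonic() avoids wall-clock jumps (NTP adjustments etc.)
    label.config(text=f"{time.monotonic():.3f}")
    root.after(1, tick)  # ~1 ms refresh; real rate is limited by the display

tick()
root.mainloop()
```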

Sync Issues

Our cameras do not have GENLOCK, and without it there can be minor drift even via SDI; that drift is why GENLOCK was originally introduced for SDI systems. Synchronization via IP is a beast of a different nature and is at the mercy of many variables, such as the specific GPU (cores / threads / freq), CPU (cores / threads / freq), other apps running, network congestion, and network configuration. There are some systems used in machine vision that currently offer GENLOCK-like capabilities via IP (GigE Vision is an example) using special switches and NICs, but this is not something we are capable of supporting.
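
As a rough illustration of what genlock-free drift looks like in numbers, the sketch below simulates per-frame arrival timestamps from two nominally 60 fps sources whose clocks disagree by a plausible amount, then reports how far apart matching frames land. The timestamps and clock errors here are fabricated for the example; in practice you would log real arrival times from whatever receiver you use.

```python
# Estimate drift between two unsynchronized 60 fps sources from
# per-frame arrival timestamps (in seconds). All values here are
# simulated for illustration; log real arrival times in practice.

FRAME_INTERVAL = 1 / 60  # nominal frame spacing at 60 fps

def simulate_arrivals(start, count, clock_error_ppm):
    """Arrival times for a source whose clock runs slightly fast or slow."""
    interval = FRAME_INTERVAL * (1 + clock_error_ppm / 1_000_000)
    return [start + i * interval for i in range(count)]

# Two free-running cameras with opposite ~50 ppm clock errors,
# starting 4 ms apart -- one minute of video each.
cam_a = simulate_arrivals(start=0.000, count=3600, clock_error_ppm=+50)
cam_b = simulate_arrivals(start=0.004, count=3600, clock_error_ppm=-50)

# Offset between matching frame numbers, sampled every 10 seconds:
# without genlock the offset wanders instead of holding steady.
for n in range(0, 3600, 600):
    offset_ms = (cam_b[n] - cam_a[n]) * 1000
    print(f"frame {n:4d} (~{n / 60:3.0f} s): offset {offset_ms:+.2f} ms")
```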

Long CAT runs...

As long as you stay within the 328' specification, with each wall plate and / or splice counting as an additional 10', and the cabling is of high enough quality to prevent early voltage drop, there should be no issues. Going beyond that 328' spec will always result in unreliable or unexpected behavior from almost any IP-connected device... the good ole "it may work today, but don't count on it working tomorrow."
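
For a back-of-the-napkin check on whether a given run is likely to brown out a PoE camera, you can estimate the resistive voltage drop. The sketch below assumes 24 AWG solid copper at roughly 0.026 ohm per foot per conductor, current shared across two pairs (typical of 802.3af/at), and an 802.3af class 3 load; your cable, injector, and camera draw may differ, so treat the numbers as illustrative only.

```python
# Rough PoE voltage-drop estimate for a CAT run. Assumptions (check
# against your actual gear): 24 AWG copper at ~0.026 ohm/ft per
# conductor, current carried on two pairs, 48 V source, and a device
# drawing ~12.95 W (802.3af class 3 maximum at the powered device).

OHMS_PER_FT = 0.026      # 24 AWG solid copper, per conductor (approx.)
PAIRS_CARRYING = 2       # two pairs in parallel each way
SOURCE_VOLTS = 48.0      # nominal PSE output
LOAD_WATTS = 12.95       # 802.3af class 3 at the device

def effective_length_ft(cable_ft, plates_or_splices):
    # Per the guidance above: each wall plate or splice counts as +10'.
    return cable_ft + 10 * plates_or_splices

def volts_at_camera(run_ft):
    loop_ohms = 2 * run_ft * OHMS_PER_FT / PAIRS_CARRYING  # out and back
    amps = LOAD_WATTS / SOURCE_VOLTS                        # first-order draw
    return SOURCE_VOLTS - amps * loop_ohms

for cable, plates in [(250, 0), (300, 2), (328, 0), (400, 1)]:
    run = effective_length_ft(cable, plates)
    print(f"{cable}' cable + {plates} plates -> {run}' effective, "
          f"~{volts_at_camera(run):.1f} V at the camera "
          f"({'within' if run <= 328 else 'beyond'} the 328' spec)")
```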

Using multiple cameras

In testing with an appropriate PC and network, I have found that adding 4, 5, even 14 sources causes no issues that did not already exist with a single IP source. The main issue remains that the network can induce varying latencies depending on a number of conditions that can occur even on an uncongested network. As mentioned above, there are companies beginning to work on this problem (more for stock-trading reasons), so hopefully networks will eventually be able to perform a form of GENLOCK.
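
One quick sanity check before stacking up sources is aggregate bandwidth against link capacity. The per-feed figure in the sketch below (~125 Mbps for full-bandwidth NDI at 1080p60) is an approximation, and the 25% headroom factor is my own assumption; measure your actual per-source rates where possible.

```python
# Quick capacity check: will N NDI feeds fit on a given link?
# ~125 Mbps per full-bandwidth 1080p60 feed is an approximation;
# the headroom factor is an assumption, not a spec.

MBPS_PER_FEED = 125       # approx. full-bandwidth NDI at 1080p60
HEADROOM = 0.75           # keep ~25% free for bursts and other traffic

def feeds_fit(num_feeds, link_mbps):
    needed = num_feeds * MBPS_PER_FEED
    budget = link_mbps * HEADROOM
    return needed, budget, needed <= budget

for n in (4, 5, 14):
    for link in (1_000, 10_000):  # 1 GbE and 10 GbE
        needed, budget, ok = feeds_fit(n, link)
        print(f"{n:2d} feeds on {link:>6} Mbps link: need ~{needed} Mbps "
              f"vs {budget:.0f} Mbps budget -> {'OK' if ok else 'saturated'}")
```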

Latency

So, much like gain structure, there are a number of variables at play when evaluating latency... I'll do my best to detail my own ranges from use on clean and congested networks.

- On an ideal network, IP video latency typically ranges from 90 ms - 150 ms (the drift is often +/- 30 ms).
- On an operational network, we often see latency in the 150 ms - 500 ms range (the drift is often +/- 70 ms).
- On a problematic network, if it is possible to use at all, we often see latency range from 500 ms - 3 seconds (unsure of potential drift).

These ranges should not change if the production system is actually capable of handling the load. As a small example, if pulling in 4 - 5 NDI feeds, the PC MUST have at least 16 threads, usually an 8-core solution, to properly handle all of the data... while I have seen lesser systems work, it is areas like latency and drift that can be impacted. I often check any system against the vMix Reference Systems as a starting point for evaluations.
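
To turn that "at least 16 threads for 4 - 5 feeds" guidance into a quick preflight check, something like the sketch below works. The 4-threads-per-feed ratio is just my rule of thumb from above, not a published requirement, so adjust it to your own experience.

```python
# Preflight check: does this machine have enough threads for the
# planned NDI feed count? The ratio comes from the rule of thumb
# above ("4 - 5 feeds needs at least 16 threads"), not a spec.
import os

THREADS_PER_FEED = 4  # rule-of-thumb ratio, not a published figure

def check_feed_budget(planned_feeds):
    threads = os.cpu_count() or 1  # logical processors (threads)
    needed = planned_feeds * THREADS_PER_FEED
    verdict = "should cope" if threads >= needed else "likely undersized"
    print(f"{threads} threads available, ~{needed} suggested "
          f"for {planned_feeds} feeds: {verdict}")

check_feed_budget(5)
```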