This is an important feature - especially with HOW (house of worship) systems being run by volunteers. Without it, the operator must cut/dissolve to a wide camera, then hit the preset, then cut back. In many cases (especially with only 1-3 cameras), it would be smoother to simulate a human operator and leave the camera live.
Thank you for taking the time to share the importance of this feature for you and everyone trying to accomplish the same level of production.
I can assure you that this is something we are investigating, but such movement in hardware is not the simplest problem to solve.
Stay tuned to this topic over the coming months to a year to see any updates, promising or not, on trisynchronous motion for the PTZOptics camera line.
It seems like trisynchronous motion should be relatively straightforward. You know where you are now and you know where you're going. There are X number of motor steps to get from A to B in each axis (plus zoom). In the most basic form, you simply adjust the speed of each axis so that all of them arrive at the same time. If the speeds are not infinitely adjustable and can't be matched, then you have to take Y steps in one axis for every Z steps taken in another. Obviously that approach is more difficult to implement.
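To put that first, basic form in concrete terms, here is a rough Python sketch of the speed-matching idea: the slowest axis sets the travel time, and every other axis is slowed so it arrives at the same moment. The step counts and speed limits below are made-up illustrative numbers, not actual PTZOptics values.

```python
def matched_speeds(deltas, max_speeds):
    """Scale each axis's speed so every axis finishes at the same time.

    deltas:     steps to travel on each axis (e.g. pan, tilt, zoom)
    max_speeds: each axis's top speed, in steps per second
    """
    # Time each axis would need at full speed; the slowest axis sets the pace.
    times = [abs(d) / s for d, s in zip(deltas, max_speeds)]
    duration = max(times)
    # Run every axis at whatever speed makes it arrive exactly at `duration`.
    return [abs(d) / duration for d in deltas], duration

# Hypothetical move: pan 1200 steps, tilt 300, zoom 450.
# Zoom (450 steps at a 150 steps/s limit) is the limiting axis here.
speeds, duration = matched_speeds((1200, 300, 450), (400, 400, 150))
```

With these numbers the move takes 3 seconds, and the pan and tilt speeds come out as whatever values cover their distances in exactly those 3 seconds.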
Trisynchronous motion is definitely one of the features we are looking at for future cameras. Vaddio and Panasonic both have this now. The other feature that is even more interesting to us is 'trace memory', where the camera can automatically play back a pre-recorded motion. We can recall a nice smooth (slow) pan and zoom motion to get from point A to point B, but not necessarily in the most direct way. These two features together would allow us to simulate what an operator can do and stay live on the PTZ camera during preset recall.
Thanks, Matt, for the reply. I think Tom's description of the math involved is a good one. If this is difficult to implement in the firmware, what about the idea of storing preset XYZ data in software on the PC instead of relying on the camera? Then your new Windows-based remote control software could calculate the offsets and required speed between two "software presets" and execute the necessary camera movement instructions. You could even have configurable ramp-up and ramp-down settings. In addition, a string of software presets could be remembered and played back to accomplish something akin to the "trace memory" function Tom described. Using "Bezier curve" preset points, you could simulate any human movement.
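As a rough illustration of the Bezier-curve idea, a software preset path could be sampled from a quadratic Bezier: the start and end points are two presets, and a middle control point bows the path so the camera takes an indirect, operator-like route. The pan/tilt coordinates below are hypothetical, not tied to any actual camera API.

```python
def bezier(p0, p1, p2, t):
    """Point on the quadratic Bezier through control points p0, p1, p2
    at parameter t in [0, 1]; p0 and p2 are the two presets."""
    u = 1 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

# Sample a curved pan/tilt path from preset A (0, 0) to preset B (100, 20),
# bowed upward through a chosen control point (50, 80).
path = [bezier((0, 0), (50, 80), (100, 20), i / 10) for i in range(11)]
```

Feeding those sampled points to the camera one after another, with ramped speeds between them, would approximate the recorded-trace behavior entirely from the PC side.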
Hello Kevin, Tim and Tom,
This is all some wonderful information, and I love the ramp-up and ramp-down ideas... traces may be a little way off for us currently, but that's another idea I would love to pursue.
I can say we have a few algorithms completed on this and software is something we are evaluating as an option.
I can say that even in software, having the pan, tilt, and zoom converge simultaneously is easier said than done.
At the moment I'm having to focus most of our engineering resources on getting ready for some really exciting new products for 2019, which might delay development on the motion work.
I'll post here as any news on firmware, hardware, or software comes to light.
Seeing the mass of communications (e-mail, chat, forums, etc.) surrounding the desire to have this "MotionSync" feature as soon as possible, I wanted to share some example footage based on our BETA implementation.
You will see two (2) additional replies below containing example videos, where you will see multiple movements occurring, some better than others.
If you take special note when a failure occurs, you'll see that when either the X or Y movement is too short and the algorithm is unable to calculate the ratios properly, it falls flat on its face and reverts to robotic movement.
In addition, you will typically see a small tail end of the zoom as the final movement... while not perfect, the quality of motion is noticeably improved.
I am happy to hear any feedback on the feature in its current state, as it may help us further refine the calculations. I plan to release this as a BETA feature in our next full round of firmware updates... there is NO current ETA available for the next round of firmware updates.
I will follow-up in this thread when I have a better idea of an approximate "launch" date.
"Beautiful" Long sweeping Motion Sync
More "shorter" movements, with a short X movement causing the camera to revert to robotic style, showcased at 19 seconds into the video.
It seems that the primary purpose of tri-synchronous motion is to be able to stay "live" on the camera while the motion is happening. If that is indeed the case, then the speeds probably need to be slowed down significantly. If the image is blurry and moving 100 miles an hour, it really isn't going to matter whether you're going directly there or not. If the zoom speed is the limiting factor, then you should probably slow down all the other servo motors so they don't arrive before the zoom is finished. Whichever axis is going to take the longest to reach its final position should drive the speeds of all the other axes.
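One common way to get that slow, live-friendly feel on top of the matched arrival time is an ease-in/ease-out ramp, so each axis starts and stops gently instead of jerking. Here is a minimal sketch of one such profile (a smoothstep curve); this is purely illustrative, not how any shipping firmware works.

```python
def smoothstep(t):
    """Ease-in/ease-out ramp: goes 0 -> 1 with zero velocity at both ends."""
    return t * t * (3 - 2 * t)

def position_at(start, end, elapsed, duration):
    """Interpolated axis position at time `elapsed`, ramping up and down
    over the shared `duration` (the one set by the slowest axis)."""
    t = min(max(elapsed / duration, 0.0), 1.0)  # clamp to [0, 1]
    return start + (end - start) * smoothstep(t)
```

Because every axis shares the same `duration` and the same ramp, they all accelerate, cruise, and settle together, which is most of what makes motion look human rather than robotic.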
Here are two examples of tri-synchronous motion in another vendor's camera:
Thank you very much for your feedback.
Speed control is handled as a minimum and maximum range; what you are currently seeing is the slowest option available for the movements requested.
The video with the larger movement is moving approximately 200 degrees horizontally from point A to point B.
If I were to make another video showcasing objects in closer proximity, as one might find on a stage, the motion would be noticeably slower due to the difference in position requested.
If this is of interest let me know and I can generate a new video this week.
While the math described in this thread is rather easy to implement as a function, it is not so easy to make affordable motors perform as well as that function.
Due to the hardware limitations of our more affordable motors, compared to the models linked in the video, we are currently unable to perform at that same level.
We are working on ways to improve the smoothing that should allow for finer control in future releases.
Our current goal is to provide "better" movement and, with time, to perfect it.
Please keep the feedback coming as it all goes into my development notes :-)
The progress is promising. A more typical Stage/HOW sample might be helpful, but the dramatic moves are probably more useful for diagnosing what doesn't quite work.
Of course, I'm not aware of the details of the motor limitations, but I presume it has to do with the precision of the steps. Let me share some ideas based on programming work I did decades ago, "sketching" house diagrams on an 80x24 Unix terminal. I don't know, but maybe it will help.
For each axis, identify the low-limit, the high-limit, and the step precision available.
Calculate a scale based on how far you need to move (or what fits within the limits).
If the scale varies between the axes, choose the smallest scale and apply it to all axis math going forward.
For the first "step", divide the distance desired by the scale. Store the rounded integer component of the result in one field (this is how far we will move this time). Add the remainder (positive or negative decimal) to a carrying field for that axis. The key is to apply the lost precision to the future movements.
For each additional step, divide the distance by the scale again, but add the carrying remainder field before rounding. Move the integer amount, but calculate a new offset based on the ongoing remainder.
In my case, as we measure around a house we always end up where we started, and addition of the remainder to each step (plus or minus) ensures all lost precision is spread across the intervening movements and not left to the end.
Now, I wasn't dealing with speed, but perhaps you can apply a T (time delay) variable to the XYZ movements and carry a similar remainder/offset for the fourth dimension.
If you would like to discuss this directly by phone, don't hesitate to reach out.
Sample math example with low resolution:
X, Y precision is limited to 20 x 20.
We wish to go up 80 and right 45 in 5 steps (per axis).
We must use the shared scale of 4.
(Add up the motions and the end is 80 higher and 45 right of the start)
Simple application of the scale
16/4 = 4, 9/4 = 2.25 (which rounds to 2)
Remainder is always the unrounded movement minus the actual rounded movement
Actual application of the scale and rolling remainder:
U4, R2 ROUND((9/4)+(0)) = 2.25 rounded to 2, remainder= +.25, we are short .25
U4, R3 ROUND((9/4)+(.25)) = 2.50 rounded to 3, remainder= -.50, we are now .50 too far
U4, R2 ROUND((9/4)+(-.5)) = 1.75 rounded to 2, remainder= -.25, we are only .25 too far
U4, R2 ROUND((9/4)+(-.25))= 2.00 rounded to 2, remainder= 0, we don't owe anything
U4, R2 ROUND((9/4)+(0)) = 2.25 rounded to 2, remainder= +.25, we are short .25 again
The final result of the physical movement is U20 and R11.
Since actual steps must be integer amounts, this is ok and expected.
The goal of 45 virtual to the right divided by a scale of 4 would be 11.25 steps, so we ended as close to perfect as possible.
Any variance is spread across the entries, in this case to the second command pair.
If you cycled another 5 step pairs back toward the start (even in a curve) you would end up with 0 remainder once you reached the origin - so long as you always add the remainder to each calculation.
Finally, as you increase the scale, the little jogs or variances between the steps become progressively less noticeable. The key is to carry and then re-add that remainder so you always end up where you planned, without a big jump at the end.
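For what it's worth, the carry-the-remainder table above can be reproduced in a few lines of code. This is only a sketch of the method as described, using round-half-up so it matches the worked numbers (many languages' built-in rounding is round-half-to-even, which would round 2.50 down to 2 instead of up to 3).

```python
import math

def stepped_moves(per_step, n):
    """Split a fractional per-step distance into n integer motor moves,
    carrying each rounding remainder into the next step so no precision
    is lost at the end."""
    carry = 0.0
    moves = []
    for _ in range(n):
        ideal = per_step + carry
        move = math.floor(ideal + 0.5)  # round half up, as in the table above
        carry = ideal - move            # the lost precision rides along
        moves.append(move)
    return moves

# The worked example: right 45 virtual units over 5 steps at a scale of 4,
# so 9/4 = 2.25 per step; up 80 over 5 steps at scale 4, so 4.0 per step.
right = stepped_moves(45 / 5 / 4, 5)  # 2, 3, 2, 2, 2 -> totals R11
up = stepped_moves(80 / 5 / 4, 5)     # 4, 4, 4, 4, 4 -> totals U20
```

Running this reproduces the R2, R3, R2, R2, R2 sequence from the table, ending at exactly U20 and R11.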
You definitely have some good experience with this :-)
You have a few variations on the concept from what we have been testing with that will be very interesting to trial on the hardware.
Give me a few weeks, I'm hoping, to look at redesigning the code to play nicely with the rest of the currently operating system using these variations.
I'll be happy to report back here as we progress.
I greatly appreciate you taking the time to not just shoot over some equations but also provide some nice explanations alongside.
I look forward to attempting to implement this per the notes above; if I hit a road block I'll definitely take you up on your very kind offer to chat about this directly.
If anything else comes to mind I'm all ears... or eyes in this case.
Welcome to the PTZOptics forum(s)!
This thread has been created for end users to discuss the topic of PTZOptics working on trisynchronous motion for preset recall.
Please make sure to respect the experience level(s) of all users at all times when posting within our forums or your account may have access removed.
Currently, PTZOptics cameras provide a very standard form of robotic camera movement when recalling presets.
Please provide any thoughts or feelings on the potential addition of a camera, or cameras, with the capability to provide a form of trisynchronous motion for preset recalls.