Synchronisation related use cases for Open Screen Protocol
The Second Screen CG is developing the Open Screen Protocol, a new open standard protocol designed to support the Presentation API and Remote Playback API.
With the Remote Playback API, a media element in a controlling web page (e.g., on a mobile or laptop device) can be put into a state in which the video is actually being rendered on another device (e.g., a TV).
The Presentation API allows a controlling web page to open another page on a receiver device, and a communication channel is established between the two pages for those pages to exchange messages.
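As a rough illustration of that flow, a controlling page might open a receiver page and exchange messages over the resulting connection. This is only a sketch: the receiver URL and message shape are made up, and `PresentationRequest` is a browser-only API that the Open Screen Protocol would carry under the hood.

```javascript
// Hypothetical controller-side flow (browser-only API; URL is illustrative).
function startPresentation() {
  const request = new PresentationRequest(["https://example.com/receiver.html"]);
  return request.start().then((connection) => {
    // The PresentationConnection is the two-way message channel
    // between the controlling page and the receiver page.
    connection.onmessage = (event) => {
      console.log("Message from receiver:", event.data);
    };
    connection.send(JSON.stringify({ type: "hello" }));
    return connection;
  });
}
```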
At the recent Second Screen CG F2F meeting, I was asked to request input from the Media & Entertainment IG on synchronisation related use cases to help with the development of the Open Screen Protocol.
For the Remote Playback API, a question arises around the level of accuracy that the media element's currentTime should have. One use case is simply showing the media playback position (i.e., a time counter) in the controlling page. Arguably, as this is typically shown with a resolution of 1 second, an accuracy within, say, 0.5 seconds would suffice. But there may be other use cases that need more accurate knowledge of the playback position, for example, showing content on the controlling page that is intended to be closely synchronised with the media playback on the receiver device.
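For instance, a time-counter display on the controlling page might be driven as below. This is a minimal sketch with helper names of my own choosing; the counter is only as trustworthy as the currentTime value the remoting protocol reports back.

```javascript
// Format a currentTime value (in seconds) as a m:ss counter.
function formatCounter(currentTime) {
  const total = Math.floor(currentTime);
  const minutes = Math.floor(total / 60);
  const seconds = total % 60;
  return `${minutes}:${String(seconds).padStart(2, "0")}`;
}

// Browser-only wiring: timeupdate fires every ~250 ms during playback,
// so a 1-second counter tolerates ~0.5 s of currentTime error, but a
// tighter sync use case would not.
function attachCounter(video, label) {
  video.addEventListener("timeupdate", () => {
    label.textContent = formatCounter(video.currentTime);
  });
}
```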
For the Presentation API, either the controlling or the receiving web page may choose to play audio or video content, with additional, related content shown on the other device.
If you have such use cases, please leave a comment here to describe or link to them, so that we can then share with the CG.
Many thanks!
In our business we use a number of existing tools that require video frame level accuracy and this kind of interaction. It's not quite the use case you describe, but it is similar.
Consider the case where users in multiple locations wish to review a media stream collaboratively, drawing annotations on top of the media. Here it is important to know exactly which frame you are drawing on, so that the annotation can be replicated on the other users' screens. One example of such a tool: https://www.ftrack.com/en/features#reviews-approvals
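To make the frame-accuracy requirement concrete, replicating an annotation exactly means mapping a playback time to a frame index and back. An illustrative sketch (the function names are mine, and the frame rate is assumed to be known out-of-band, e.g. 24 for film material):

```javascript
// Map a currentTime value (seconds) to a frame index.
function timeToFrame(currentTime, frameRate) {
  // Rounding to the nearest frame boundary tolerates small clock error,
  // but an error above half a frame period (~20 ms at 24 fps) still
  // lands the annotation on the wrong frame.
  return Math.round(currentTime * frameRate);
}

// Map a frame index back to a playback time, e.g. to seek a remote
// player to the annotated frame.
function frameToTime(frame, frameRate) {
  return frame / frameRate;
}
```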
We also use other, non-web-based tools that allow for multiple controlling endpoints, where it is critical that when user A hits 'stop', all other users are synchronised to display the exact same frame.
Equally, I could imagine a user with a tablet controlling a playback device feeding external displays, where we would require exact frame seeking within sub-sets of the media. In post production, where you have unfinished media, there are often 'handles' of extra frames at both ends of the media (areas of footage not present in the final edited product). Here you often want to seek only to the beginning of the cut, which might be a given number of frames from the start; knowing exactly where you are within that smaller cut range is needed to correctly draw the timeline cursor, etc.
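The 'handles' case above can be sketched as follows (the names and frame counts are purely illustrative): given the number of handle frames at each end, the timeline cursor position within the cut can be derived from the absolute media frame.

```javascript
// Return the cursor position within the cut as a fraction in [0, 1],
// given an absolute frame index into the full media (handles included).
function cursorFraction(mediaFrame, handleFrames, totalFrames) {
  const cutStart = handleFrames;
  const cutLength = totalFrames - 2 * handleFrames;
  // Clamp so frames inside the handles pin the cursor to an end of the cut.
  const within = Math.min(Math.max(mediaFrame - cutStart, 0), cutLength);
  return within / cutLength;
}
```

Without frame-accurate knowledge of the playback position, a cursor computed this way can visibly jump to the wrong frame at the edges of the cut.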