CC and multiple audio tracks for dash playback
During DASH playback in the Roku video player, we are facing the following issues:
1. The language field of availableSubtitleTracks in the video node is empty, so we are unable to identify the language of a particular CC track.
In Video node:
availableSubtitleTracks = [{
Description: ""
Language: ""
TrackName: "eia608/CC1"
}]
In mpd:
<Accessibility schemeIdUri="urn:scte:dash:cc:cea-608:2015" value="CC1=eng;CC3=swe"/>
2. We are unable to identify the type of the audio tracks (such as stereo or Dolby). The name field of availableAudioTracks in the video node is empty, and this field is what we would use to identify the audio track type. Do we need to add anything to the DASH mpd to get a value in the name field?
In video node:
availableAudioTracks = [{
Format: "ISO/IEC 14496-3, Advanced Audio Coding, ASC header"
Language: "eng"
Name: ""
Track: "dash/a~AAC~eng~main~2"
}]
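For context, here is a sketch of what we might expect to put in the mpd. The Label element and Role descriptor below are standard DASH (ISO/IEC 23009-1), but whether the Roku player maps Label into the Name field of availableAudioTracks is an assumption on our side, not confirmed behavior:

```xml
<!-- Hypothetical MPD fragment: an audio AdaptationSet carrying a
     human-readable Label. Whether Roku copies Label into
     availableAudioTracks.Name is an assumption, not documented. -->
<AdaptationSet contentType="audio" mimeType="audio/mp4" lang="eng" segmentAlignment="true">
  <Label>English Stereo (AAC)</Label>
  <Role schemeIdUri="urn:mpeg:dash:role:2011" value="main"/>
  <Representation id="a-aac-eng" codecs="mp4a.40.2" bandwidth="128000" audioSamplingRate="48000">
    <!-- value="2" signals 2-channel (stereo) audio -->
    <AudioChannelConfiguration
        schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011"
        value="2"/>
  </Representation>
</AdaptationSet>
```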
Can you please provide an example DASH mpd that yields the values mentioned above?
Re: CC and multiple audio tracks for dash playback
We are facing a similar issue with embedded CC.
If we set subtitleConfig to "eia608/1" by default and the video manifest doesn't contain any subtitles, the player adds a subtitle track labeled "subtitle 1" with invalid info (when the user selects that subtitle, nothing is displayed).
If the video manifest does contain subtitles, the player adds the proper subtitle tracks with valid info.
An AdaptationSet groups together representations of the same type (video, audio, or captions). Audio-description example in a DASH .mpd (found online):
<Accessibility schemeIdUri="urn:tva:metadata:cs:AudioPurposeCS:2007" value="1"></Accessibility>
<Role schemeIdUri="urn:mpeg:dash:role:2011" value="alternate"></Role>
An example with CC is shown in the previous post.
If there is no option other than parsing the manifest, how can we validate that we are relying on subtitle accessibility data rather than audio? What is the marker?
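One candidate marker, based only on the snippets in this thread (an assumption, not verified against the DASH spec): the schemeIdUri of the Accessibility descriptor, combined with the content type of the enclosing AdaptationSet. The two cases might be distinguished like this:

```xml
<!-- Sketch: CEA-608 caption signaling uses the SCTE scheme, and the
     Accessibility descriptor sits inside the video AdaptationSet
     (embedded captions travel in the video stream): -->
<AdaptationSet contentType="video" mimeType="video/mp4">
  <Accessibility schemeIdUri="urn:scte:dash:cc:cea-608:2015" value="CC1=eng;CC3=swe"/>
</AdaptationSet>

<!-- Audio description uses the TVA AudioPurposeCS scheme, inside an
     audio AdaptationSet: -->
<AdaptationSet contentType="audio" mimeType="audio/mp4" lang="eng">
  <Accessibility schemeIdUri="urn:tva:metadata:cs:AudioPurposeCS:2007" value="1"/>
  <Role schemeIdUri="urn:mpeg:dash:role:2011" value="alternate"/>
</AdaptationSet>
```

So a parser could key on the schemeIdUri prefix (urn:scte:dash:cc for captions vs. urn:tva:metadata:cs:AudioPurposeCS for audio purposes) and fall back to the AdaptationSet's contentType/mimeType.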