The objective of this technique is to provide a way for people who are deaf
or otherwise have trouble hearing the dialogue in audiovisual material to
view that material. With this technique, all of the dialogue and important
sounds are provided in a text stream that is displayed in a caption area.
With SMIL 1.0, separate regions can be defined for the video and the
captions. Caption text and video playback are synchronized: the caption text
is displayed in one region of the screen while the corresponding video is
displayed in another region.
Examples
Example 1: SMIL 1.0 caption sample for QuickTime player
The example shows a <par> segment containing a <video> and a
<textstream> tag. The system-captions attribute indicates that the
textstream should be displayed when the user's player preference is set to
show captions. The <layout> section defines the regions used for the video
and the captions.
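A minimal sketch of what such a SMIL 1.0 presentation might look like is
shown below; the file names (movie.mov, captions.txt), region names, and
dimensions are placeholder assumptions rather than values from the original
sample.

<smil>
  <head>
    <layout>
      <!-- overall presentation area -->
      <root-layout width="330" height="310" background-color="black"/>
      <!-- one region for the video, one below it for the captions -->
      <region id="videoregion" top="5" left="5" width="320" height="240"/>
      <region id="captionregion" top="250" left="5" width="320" height="60"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- video and caption text play in parallel, each in its own region -->
      <video src="movie.mov" region="videoregion"/>
      <!-- system-captions="on" tells the player to render this text stream
           only when the user's caption preference is turned on -->
      <textstream src="captions.txt" region="captionregion"
                  system-captions="on"/>
    </par>
  </body>
</smil>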
Example 3: SMIL 1.0 caption sample with internal text streams