Interoperable Master Format (IMF)
Interoperable Master Format (IMF) Specification
February 19, 2011
Version
Copyright 2009, 2010, 2011 by Entertainment Technology Center
509 West 29th Street, Los Angeles, California, United States of America
2 NOTICE The Entertainment Technology Center at the University of Southern California (ETC) is the author and creator of this draft specification for the purpose of copyright and other laws in all countries throughout the world. The ETC copyright notice must be included in all reproductions, whether in whole or in part, and may not be deleted or attributed to others. ETC hereby grants to persons and entities reviewing this document a limited, non-exclusive license to reproduce this specification for their own use, provided it is not sold. Inquiries regarding permission to reproduce or distribute this specification should be directed to the Entertainment Technology Center at the University of Southern California; Attn: CEO and Executive Director; 509 West 29 th Street, Los Angeles, California 90007; USA; (213) (voice); (213) facsimile; [email protected]. This document is a draft specification developed by the Interoperable Master Format (IMF) project participants, as hosted by the ETC. It is intended solely as a guide for companies interested in developing products that can be compatible with other products developed using this document. Neither the IMF project participants nor ETC shall be liable for any direct, exemplary, incidental, proximate or consequential damages or expenses arising from the use of this document. This document defines only one approach to compatibility regarding its subject matter, and other approaches may be available. By creation and distribution of this draft document, no position is taken by the IMF project participants or ETC with respect to the validity or infringement of any patent or other proprietary right. The IMF participants and ETC hereby expressly disclaim any liability for infringement of intellectual property rights of others by virtue of the use of this document. The IMF participants and ETC have not and do not investigate any notices or allegations of infringement prompted by the distribution of this document, nor do the IMF participants or ETC undertake a duty to advise users or potential users of this document of such notices or allegations. The IMF participants and ETC hereby expressly advise all users or potential users of this document to investigate and analyze any potential infringement claims, seek the advice of intellectual property counsel, and, if indicated, obtain a license under any applicable intellectual property right or take the necessary steps to avoid infringement of any intellectual property right. The IMF participants and ETC expressly disclaim any intent to promote infringement of any intellectual property right by virtue of the creation or distribution of this document. 2
3 Table of Contents 1 IMF INTRODUCTION INTRODUCTION SCOPE BASIC AND EXTENDED LEVELS SYSTEM OBJECTIVES DOCUMENT LANGUAGE SYSTEM OVERVIEW DOCUMENT FRAMEWORK BASIC AND EXTENDED LEVEL REFERENCES OVERVIEW IMF SYSTEM ARCHITECTURE MAJOR SYSTEM KEY CONCEPTS BUSINESS-TO-BUSINESS SOLUTION DIGITAL SOURCE MASTER (DSM) ARCHIVE NOT IN SCOPE FILE/FRAME-BASED SYSTEM ESSENCE AND DATA ESSENCE METADATA SECURITY WRAPPING AND TRACK FILE CREATION COMPOSITION VERSIONS SEQUENCE OUTPUT PROFILE LIST PACKAGES
4 2.3 IMF ELEMENTS AND PROCESSES ESSENCE OVERVIEW INTRODUCTION ESSENCE SYSTEM OVERVIEW ESSENCE FUNDAMENTAL REQUIREMENTS COMMON ESSENCE FILE FORMATS FRAME RATES AND SYNCHRONIZATION IMAGE SPECIFICATION IMAGE CONTAINER, ACTIVE IMAGE, PIXEL ASPECT RATIO PIXEL ASPECT RATIO BIT DEPTH CHROMA SUBSAMPLING COLOR SPACE, COLOR SPACE GAMUTS STEREOSCOPIC CONTENT COMPRESSION REQUIREMENTS Image Compression Requirements Intra-Frame Compression Spatial Resolution Scalability Image Compression Codecs UNCOMPRESSED REQUIREMENTS STRUCTURAL METADATA Image Metadata Required Fields AUDIO SPECIFICATION AUDIO FILE FORMAT SAMPLING RATE
5 3.4.3 FRAME RATE/AUDIO SPEED ALLOWABLE SAMPLES PER FRAME AUDIO BIT DEPTH AUDIO TRACK FILE CONTENT AUDIO TRACK FILE ELEMENT CONSTRAINT AUDIO TRACK FILE LANGUAGE CONSTRAINT TRACK FILE LOUDSPEAKER CHANNEL CONTENT CONSTRAINT CHANNELS PER TRACK FILE CONSTRAINT SIMULTANEOUS MULTIPLE AUDIO TRACK FILE AVAILABILITY AND PLAYOUT MULTIPLE AUDIO TRACK FILE ROUTING MIXING OF AUDIO CHANNELS SOUNDFIELD CONFIGURATION CONSTRAINTS CHANNEL LAYOUT AND MAPPING AUDIO METADATA DATA ELEMENTS DATA ESSENCE OVERVIEW INTRODUCTION DATA ESSENCE SYSTEM OVERVIEW MAJOR DATA ESSENCE CONCEPTS AND DEFINITIONS SUBTITLES CAPTIONS CLOSED OPEN DATA ESSENCE FUNDAMENTAL REQUIREMENTS COMMON FILE FORMATS FRAME RATES
6 SYNCHRONIZATION SUB PICTURE (PRE-RENDERED OPEN TEXT AND/OR GRAPHICS) DESCRIPTION FILE FORMAT COMPOSITION TEXT COLOR INTERPRETATION FRAME RATE AND TIMING SYNCHRONIZATION TIMED TEXT (PRESENTATION OF TEXT IN SYNC WITH AUDIO AND VIDEO) DESCRIPTION FILE FORMAT DEFAULT FONTS IDENTIFICATION SEARCHABILITY MULTIPLE CAPTIONS SYNCHRONIZATION STEREOSCOPIC OFFSET DATA ESSENCE SPECIFICATION DYNAMIC METADATA OVERVIEW INTRODUCTION DYNAMIC METADATA SYSTEM OVERVIEW MAJOR DYNAMIC METADATA CONCEPTS DYNAMIC METADATA FUNDAMENTAL REQUIREMENTS COMMON FILE FORMATS STORED TIME CODE REQUIREMENTS
7 5.3 PAN AND SCAN SPECIFICATION PAN AND SCAN REQUIREMENTS TIME CODE AND MOVEMENT IDENTIFICATION SOURCE IMAGE, OUTPUT IMAGE AND VIEWPORT PAN AND SCAN METADATA REQUIRED FIELDS WRAPPING OVERVIEW INTRODUCTION WRAPPING REQUIREMENTS INTRODUCTION FORMAT INFORMATION METADATA SYNCHRONIZATION SPLICING SECURITY EXTENSIBILITY SIMPLE ESSENCE MXF TRACK FILE ENCRYPTION (EXTENDED LEVEL) INTRODUCTION STANDARDS IMAGE TRACK FILE INTRODUCTION FRAME BOUNDARIES COMPRESSION STANDARDS AUDIO TRACK FILE
8 6.5.1 INTRODUCTION STANDARDS METADATA TIMED TEXT TRACK FILE INTRODUCTION STANDARDS TIME CODE TRACK FILES (EXTENDED LEVEL) INTRODUCTION FRAME RATE FAITHFUL CAPTURE FLEXIBLE OUTPUT FRAME BOUNDARIES METADATA THE COMPOSITION INTRODUCTION OVERALL REQUIREMENTS SYNCHRONIZATION HUMAN READABLE METADATA STEREOSCOPIC CONTENT FILE FORMAT AUDIO SYNCHRONIZATION AUDIO EDITING GRANULARITY CROSSFADES AND TRANSITIONS TERMINOLOGY SYNCHRONIZATION
9 7.5 COMPOSITION PLAYLIST STRUCTURE ID ANNOTATION [OPTIONAL] ISSUEDATE ISSUER [OPTIONAL] CREATOR [OPTIONAL] CONTENTTITLETEXT CONTENTKIND [OPTIONAL] CONTENTVERSIONLIST [OPTIONAL] ID LABELTEXT ESSENCEDESCRIPTORLIST [OPTIONAL] COMPOSITIONTIMECODE [OPTIONAL] TIMECODEDROPFRAME TIMECODEINTERLACE TIMECODERATE TIMECODESTARTADDRESS EDITRATE TOTALRUNNINGTIME [OPTIONAL] LOCALELIST [OPTIONAL] ANNOTATION [OPTIONAL] LANGUAGELIST [OPTIONAL] COUNTRYLIST [OPTIONAL] REGIONLIST [OPTIONAL] RATINGLIST [OPTIONAL] AGENCY
10 LABEL SEQUENCELIST SIGNER [OPTIONAL] SIGNATURE [OPTIONAL] SEQUENCE STRUCTURE ID ANNOTATION [OPTIONAL] SEQUENCEISSUEDATE [OPTIONAL] SEGMENTLIST SEGMENTTYPE ID TRACKID [OPTIONAL] RESOURCELIST IMAGESEGMENT [OPTIONAL] AUDIOSEGMENT [OPTIONAL] SUBTITLESEGMENT [OPTIONAL] TIMECODESEGMENT [OPTIONAL] MARKERSEGMENT [OPTIONAL] CAPTIONSEGMENT [OPTIONAL] EXTENSION (DEFINING NEW KINDS OF SEGMENTS) RESOURCE STRUCTURE BASERESOURCETYPE ID ANNOTATIONTEXT [OPTIONAL] INTRINSICDURATION INTRINSICEDITRATE [OPTIONAL]
11 7.7.6 ENTRYPOINT [OPTIONAL] REPEATCOUNT [OPTIONAL] SOURCEDURATION [OPTIONAL] TRACKFILERESOURCETYPE ASSETID SOURCEENCODING KEYID [OPTIONAL] HASH [OPTIONAL] STEREOIMAGETRACKFILERESOURCETYPE MARKERRESOURCETYPE MARKERLIST LABEL ANNOTATIONTEXT [OPTIONAL] OFFSET CONSTRAINTS CONSTANT FRAME AND SAMPLE RATE MINIMUM TRACK FILE DURATION MINIMUM SEQUENCE DURATION FRACTIONAL SAMPLE EDITORIAL GRANULARITY STEREOSCOPIC CONTENT ESSENCEDESCRIPTOR INFORMATION GENERIC SOURCEMEDIADESCRIPTION STANDARDBADGE STANDARDSBODY LABEL
12 7.9.6 TIMECODETYPE IMAGE FRAMERATE CONTAINERHEIGHT CONTAINERWIDTH COLORSAMPLING COLORENCODING SAMPLETYPE IMAGEBITDEPTH IMAGEASPECTRATIO ACTIVECOORDINATES CODEVALUEBLACK CODEVALUEWHITE AUDIO SAMPLERATE AUDIOSAMPLESPERFRAME SOUNDFIELDCONFIG SAMPLEBITDEPTH CHANNELCOUNT LANGUAGE [OPTIONAL] SUBTITLE AND CAPTION ESSENCE FORMAT [OPTIONAL] LANGUAGE [OPTIONAL] OUTPUT PROFILE LIST INTRODUCTION OUTPUT PROFILE LIST DEFINITION
13 8.2 OUTPUT PROFILE LIST OVERVIEW FUNCTIONAL FRAMEWORK OPL RELATIONSHIP OVERVIEW FUNCTIONAL FRAMEWORK OUTPUT PARAMETERS PRE-PROCESS PARAMETERS COLORSPACE CONVERSION PARAMETERS ENCODING/TRANSCODING PARAMETERS OUTPUT PROFILE LIST FUNDAMENTAL REQUIREMENTS (BASIC LEVEL) OPEN STANDARD INTEROPERABLE SCALABLE EXTENSIBLE SYNCHRONIZATION HUMAN READABLE METADATA FILE FORMAT OUTPUT PROFILE LIST CONSTRAINTS REFERENCE TO A CPL (BASIC LEVEL) PRECEDENCE OF OPERATIONS (EXTENDED LEVEL) OUTPUT PROFILE LIST STRUCTURE GENERAL INFORMATION HEADER [REQUIRED] UNIQUE ID ANNOTATIONTEXT [OPTIONAL] ISSUEDATE ISSUER [OPTIONAL]
14 8.5.7 CREATOR [OPTIONAL] CONTENTTITLETEXT CONTENTKIND [OPTIONAL] COMPOSITIONPLAYLIST REFERENCE OUTPUTTEXT [OPTIONAL] IMAGEOUTPUTFORMAT [OPTIONAL] BitRate COLORENCODING StandardsBody [optional] ColorSpace [optional] ChromaFormat [optional] ChromaEncoding [optional] BitDepth [optional] TransferFunction [optional] CodeRange [optional] COMPRESSION STANDARD CompressionType [optional] Label [optional] PictureCoding [optional] ProfileType [optional] LevelType [optional] StandardsBody [optional] Label [optional] FRAMERATE [OPTIONAL] CROP [OPTIONAL] CANVASCOORDINATES [OPTIONAL]
15 AUDIOOUTPUTFORMAT [OPTIONAL] SampleRate [optional] BitDepth [optional] SamplesPerFrame [optional] SpeedChange [optional] PitchCorrection [optional] CompressionStandard [optional] BitRate [optional] StandardsBody [optional] Label [optional] AUDIOCONFIG [OPTIONAL] ChannelNN [optional] Language [optional] Config [optional] Channel [optional] Label [optional] OUTPUTSPEEDOFFSET [OPTIONAL] COLORTRANSFORMS [OPTIONAL] ENCODINGFORMAT [OPTIONAL] PREPROCESSOPS [OPTIONAL] SCALING TIMECODECHANGE OVERLAYTYPE OPL EXAMPLES A SIMPLE OPL (BASIC LEVEL) A LEVEL-1 COMPLEX OPL (EXTENDED LEVEL)
16 8.6.3 A LEVEL-2 COMPLEX OPL (EXTENDED LEVEL) AN OVERLAY PRE-PROCESSING BLOCK FOR AN OPL (EXTENDED LEVEL) PACKAGING INTRODUCTION PACKAGING SYSTEM OVERVIEW FUNCTIONAL FRAMEWORK PACKAGING CONCEPTS DISTRIBUTION PACKAGE INTRODUCTION DISTRIBUTION PACKAGE GENERAL PACKING FOR TRANSPORT DISTRIBUTION VOLUME General STANDARDS ANNEX A - EXAMPLE WORKFLOWS B - COMPOSITION PLAYLIST (CPL) EXAMPLE B1 CPL SCHEMA B2 COMPOSITION PLAYLIST (CPL) EXAMPLE C - OUTPUT PROFILE LIST XML EXAMPLES C1 - OPL OUTLINE IN PSEUDO-XML (EXTENDED LEVEL) C2 - A SIMPLE OPL (BASIC LEVEL) C3 - A COMPLEX (LEVEL1) OPL (EXTENDED LEVEL) C4 - A COMPLEX (LEVEL2) OPL (EXTENDED LEVEL) C5 - A PRE-PROCESSING SECTION FOR AN OPL (EXTENDED LEVEL)
17 GLOSSARY OF TERMS
18 Table of Figures Figure 1 - Example Theatrical Workflow...25 Figure 2 - Example IMF System Architecture...26 Figure 3 - Example Wrapping...27 Figure 4 - Example IMF Composition...28 Figure 5 - Example IMF Hierarchical Structure...29 Figure 6 - Example IMP...30 Figure 7 - Example of an Image Container vs. 2.39:1 Active Image Area...33 Figure 8 - Example 1: Simple cuts from one area to another...55 Figure 9 - Example 2: Pan from one area to another...56 Figure 10 - Example 3: Change in Output Image size...57 Figure 11 - Example 4: Squeezing or Scaling Shots...57 Figure 12 - Example Track File Structure...59 Figure 13 - Example of KLV Coding...60 Figure 14. Composition Playlist Timing Figure 15. Sequence Timing Figure 16. Segment Timeline Figure 17. Resource Timeline...67 Figure 18. Sample Monoscopic Composition Playlist. Only the first and last bytes of UUIDs are represented Figure 19. Sample Stereoscopic Composition Playlist. Only the first and last bytes of UUIDs are represented Figure 20. Composition Playlist Element Structure Figure 21. Locale Element Structure Figure 22 - Sequence Structure...76 Figure 23 - Output Profile List Relationship Overview
19 Figure 24 - Future State - Mastering & Distribution Servicing Figure 25 - Example IMF Workflow Table of Tables Table 1: IMF Elements...31 Table 2: Basic Level Image Containers and Frame Rates...34 Table 3: Extended Level Image Containers and Frame Rates...35 Table 4: Image Metadata Data Elements...38 Table 5: Allowable Samples/Frame for Specified Frame Rates*...41 Table 6 - Audio Metadata Data Elements...44 Table 7 - Pan and Scan Metadata Data Elements...54 Table 8 Metadata Fields common to Event 0000 and Table 9 - PS Event ID: 0000, Pan/Scan Event for Shot # Table 10 - PS Event ID: 0001, Pan/Scan Event for Shot # Table 11: XML Namespaces...63 Table 12: Terms and Definitions...65 Table 13: Examples of Content Kind...71 Table 14: Example Ratings (Informative)...75 Table 15: Examples of Marker Labels...80 Table 16: XML Namespaces...90 Table 17: Examples of CompressionType Elements...96 Table 18: Examples of PictureCoding Elements...96 Table 19: Examples of ProfileType elements...96 Table 20: Examples of LevelType Elements...97 Table 21: Examples of Label Elements
20 Table 22: Glossary of Terms
1 IMF Introduction

1.1 Introduction

With the advance of technology within the motion picture post-production industry, a paradigm shift is upon us as we move from the videotape workflow to the file-based workflow. With this shift, a need has arisen for a standardized set of specifications for the file-based workflow. In light of this shift, content creators sought out an organization to provide the facility for detailed discussions on this topic. The Entertainment Technology Center (ETC) stood out as a leading candidate for such discussions, and in November 2008 the Interoperable Master Format (IMF) specification effort was initiated.

1.2 Scope

The IMF file-based workflow is designed to replace the existing tape-based Distribution Servicing workflow. It stores one master set of file-based elements that can be assembled for any downstream distribution using multiple Composition PlayLists (recipes), similar to what is used in present-day Digital Cinema Packages (DCP). The broad concept of a high-quality, uniform IMF should lower costs, improve time-to-market, and increase interoperability of existing production processes. This specification is intended to promote interoperability and faster implementation of multiple variants of a common video package, while retaining the highest quality and appropriate security of the material. It is envisioned that this final specification would then be presented to SMPTE (Society of Motion Picture and Television Engineers) to create standards, recommended practices and engineering guidelines that implement this specification throughout the industry as a common interchange method.

1.3 Basic and Extended Levels

In order to accelerate the adoption of this specification, it has been conceptualized in two levels. The first is the Basic Level, which will be less complicated to implement at the time of this publication. This document also identifies additional functionality that is beyond the scope of the Basic Level. This functionality is described as Extended Level and may be realized as one or more extensions to the Basic Level standard. Throughout this document certain items and elements are identified as either Basic or Extended. If there is no reference to Basic or Extended, an item or element should be assumed to be Basic.

1.4 System Objectives

At the outset of writing a specification for an Interoperable Master Format, ETC acknowledged certain fundamental requirements, which are:

The Interoperable Master Format (IMF) shall provide for a single set of master files and recipes to allow for easy creation of versions for distribution channels. This should also provide the potential to repurpose existing content.

The IMF shall have the capability to present Essence and Metadata that are equal to or better than what one could achieve with current practice (i.e., videotape).

The IMF shall provide mechanisms (e.g., asset management, packing lists) for the exchange of IMF metadata or packages between facilities.

This system should be based upon international standards so that content can be interchanged anywhere in the world, as can be done today with videotape. These standards should be open, published industry standards that are widely accepted and codified by regional and international standards bodies such as ANSI, SMPTE, ITU, W3C, and ISO/IEC.
The system specification and formats should be chosen so that the capital equipment and operational costs are reasonable and exploit, as much as possible, the economies of scale associated with equipment and technology in use in other industries. The hardware and software used in the system should be easily upgraded as advances in technology are made. Upgrades to the format should be designed so that content may be
distributed and compatibly exchanged on the latest IMF-compliant hardware and software, as well as on earlier adopted IMF-compliant equipment installations.

The Interoperable Master Format shall be based upon a component architecture (e.g., Mastering, Compression, Encryption, Distribution, Storage, Playback) that allows for the components to be replaced or upgraded in the future without the replacement of the complete system. It is the intention of this Digital Video specification to allow for advances in technology and the economics of technology advancement. IMF Extended Level(s) must be backward compatible with all functionality in IMF Basic. As technology advances, further extensions in functionality may be amended to the specification, resulting in additional Extended Levels. Each new level must be backward compatible with all previous extensions and the Basic Level.

The Interoperable Master Format shall provide reliability and availability that are equal to, or better than, current practice (i.e., videotape).

The Interoperable Master Format shall allow for the use of a security method. The system should provide a means to allow the content to use standardized encryption with private/public keys. The IMF should also allow the use of forensic marking of the content for providing traceable forensic evidence in the case of theft.

1.5 Document Language

This document consists of normative text and optional informative text. Normative text is text that describes the elements of the design that are indispensable or that contains the conformance language keywords: shall, should or may. Informative text is text that is potentially helpful to the user, but not indispensable, and can be removed, changed or added editorially without affecting interoperability. Informative text does not contain any conformance keywords. All text in the document is, by default, normative except: any section titled Introduction, any section explicitly labeled as Informative, or individual paragraphs that start with the word Note.

Normative references are those external documents referenced in normative text and are indispensable to the user. Informative, or bibliographic, references are those references made from informative text or that are otherwise not indispensable to the user.

The keywords shall and shall not indicate requirements that must be strictly followed in order to conform to the document and from which no deviation is permitted. The keywords should and should not indicate that, among several possibilities, one is recommended as particularly suitable, without mentioning or excluding others; or that a certain course of action is preferred but not necessarily required. In the negative form, a certain possibility or course of action is deprecated but not prohibited. The keywords may and need not indicate a course of action permissible within the limits of the document. The keyword reserved indicates that a condition is not defined and shall have no meaning; however, it may be defined in the future. The keyword forbidden is the same as reserved, except that the condition shall never be defined in the future.

A compliant implementation is one that includes all mandatory provisions (shall) and, if implemented, all recommended provisions (should) as described. A compliant implementation need not implement optional provisions (may). Requirements are indicated with the key phrases is required to, is encouraged to and can, which represent shall, should and may (had the text been in a separate requirements document).
This is necessary in order to distinguish requirements from the specification conformance language. The following keywords are italicized: shall, shall not, should, should not, required, not required, encouraged, and not encouraged. The names of standards publications and protocols are placed in [bracketed text]. International and industry standards contain provisions, which, through reference in this text, constitute provisions of this specification. At the time of publication, the editions indicated were valid. These referenced standards are subject to revision, and parties to agreements based upon this specification are encouraged to investigate the possibility of applying the most recent editions of the referenced standards. Section 0 is a glossary of technical terms and 22
23 acronyms used throughout this specification. The reader is encouraged to refer to the glossary for any unfamiliar terms and acronyms. Trademarked names are the property of their respective owners. 23
2 SYSTEM OVERVIEW

2.1 Document Framework

The document defines technical specifications and requirements for the creation and distribution of an Interoperable Master Format. The details are in the following sections:

Essence: This section provides specifications for the image and audio essence and its specific inherent metadata. The image essence specification defines a common set of image structures by specifying parameters such as image container, colorimetry and, if required, a mezzanine compression structure for the Interoperable Master Format (IMF). The audio essence specification defines audio characteristics such as bit depth, sample rate, minimum channel count, channel mapping and reference levels.

Data Essence: This section provides specifications for the subtitle (timed text and sub-picture) and caption data essence. The subtitle essence specifies the format of a Digital Video subtitle track file. A subtitle file contains a set of instructions for placing rendered text or graphical overlays at precise locations on distinct groups of motion picture frames. A caption file should provide graphical overlays, or provide graphical information to a secondary system for display of text. Apart from providing information to a secondary system, most other parameters are the same as for subtitle data essence.

Dynamic Metadata: This section provides the specifications for metadata that is outside of the metadata associated with the essence or data essence, such as time code, pan and scan, and color transforms.

Wrapping: This section defines the requirements for wrapping the content and metadata (image, audio and subtitle) files using (where possible) existing Material Exchange Format (MXF) specifications. The output of this process is the Track Files. This section also defines the requirements for encrypting the essence (sound, picture and subtitles) of the IMF if required.

Composition: This section provides the specifications for the creation of XML-based Composition PlayLists (CPLs), which are scripts that link the IMF Track Files together into synchronized pieces of content. This section also touches upon the security requirements for a Composition and Composition PlayList.

Output Profile List: This section describes a set of information that would be used in conjunction with a CPL to specify particular content provider output preferences. In a typical workflow, it would point to a specific CPL within an IMF and instruct a transcoder or playout device to manipulate the content per those provider output preferences.

Packaging: This section details how Interoperable Master Format Compositions can be packaged for distribution, and provides requirements for the tools necessary for editorial functions in a typical post-production environment.

Annex: This section contains example workflows using the IMF as well as definitions and suggested devices for editorial systems, transcoding, and playout systems.

Basic and Extended Level References

Throughout this document there are elements of the specification that are categorized as either Basic Level or Extended Level. This is done intentionally to distinguish for the reader what will be included in a Basic IMF structure and what will be added to extend the Basic Level. This designation will be specified within each chapter where it is pertinent. Where it is not designated, it is assumed to be part of the Basic Level specification.
2.2 Overview

The goal of the IMF project is to establish an integrated hardware and software solution for the processing, storage, and management of digital content intended for downstream distribution to businesses that will consume such content.

IMF System Architecture

Figure 1, below, demonstrates an example of the end-to-end data flow from Production to Distribution. The focus of IMF is within the Distribution Servicing phase.

Figure 1 - Example Theatrical Workflow

To further demonstrate the concept of the Interoperable Master Format in a system architecture for distribution servicing, refer to Figure 2, below. The IMF aggregates image, audio, and other files to support processing for any downstream delivery requirement. It does this using Composition PlayLists to create the deliverables using transcoding technologies. The IMF package shall allow for delivery using the image elements stored in either a compressed or uncompressed file. An example of this would be an HD resolution with 709 color space at 24 FPS converted to SD resolution with 601 color space at FPS.
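As a rough, informative illustration of one difference between the 709 and 601 color encodings mentioned in the example above, the Python sketch below compares the luma weightings the two standards assign to the same R'G'B' pixel. A real HD-to-SD conversion also involves spatial scaling, chroma resampling and gamut/transfer-function handling, which this sketch does not attempt.

```python
# Luma coefficients for the two color encodings referenced above.
REC709 = (0.2126, 0.7152, 0.0722)   # ITU-R BT.709
REC601 = (0.299, 0.587, 0.114)      # ITU-R BT.601

def luma(rgb, coeffs):
    """Weighted sum of non-linear R'G'B' giving Y' for the chosen standard."""
    return sum(c * v for c, v in zip(coeffs, rgb))

pixel = (0.9, 0.2, 0.1)             # arbitrary example R'G'B' values in 0..1
print(f"Y' (709): {luma(pixel, REC709):.4f}")
print(f"Y' (601): {luma(pixel, REC601):.4f}")
# Converting HD (709) material to SD (601) therefore requires re-deriving
# Y'CbCr with the 601 coefficients, not just rescaling the picture.
```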
2.2.2 Major System Key Concepts

Figure 2 - Example IMF System Architecture

The concepts listed below are the key concepts of the IMF as a system. These concepts are derived from work done by content creators to create masters for the downstream distribution servicing of theatrical content. This is not to say that these concepts or the IMF should not be applied to other forms of content; rather, it is noted here so that the reader may better understand their origin.

Business-to-Business Solution

The IMF is intended to facilitate internal or business-to-business relationships and is not intended to be delivered to the consumer directly.

Digital Source Master (DSM)

The IMF is derived from a wide range of sources with a wide range of technical levels. One example is content in its finished state at the end of the theatrical post-production process, shown above in Figure 2 as a Digital Intermediate, also known as a Digital Source Master (DSM). The DSM may also be used to create a film duplication master and/or a master for archival purposes. It is not the intention of this document to, in any way, specify the DSM. This is left to the discretion of the content provider.

Archive not in Scope

It is not the intention of this specification to design the IMF as an archivable master. One could conceive of this as another use of the IMF; however, it is not part of the scope of this specification.
2.2.6 File/Frame-Based System

The IMF is built upon a data file-based design, i.e., all of the content is made up of data stored in files. These files are organized around the image frames, which means that all synchronization is referenced to the image frames and frame rate. The file is the most basic component of the system.

Essence and Data Essence

The raw image and audio files of the IMF make up what is known as Essence. Also included as part of the IMF shall be subtitle or caption essence. This type of essence is called Data Essence; because of its nature it shall require different specifications and hence is described in its own section.

Metadata

Metadata is essentially data about data. There are many kinds of metadata, including:

Supporting metadata
Descriptive metadata
Dynamic metadata

Supporting and Descriptive metadata are defined by SMPTE. A new concept proposed for the IMF is Dynamic metadata: metadata that changes on a frame basis, such as time code, pan and scan, and color transforms. This metadata may be applied to the underlying essence. It shall be synchronized to the essence and therefore shall be wrapped and contained as Track Files.

Security

The IMF should not preclude the use of encryption or the use of forensic marks. This should occur at the wrapping, composition or packaging stage.

Wrapping and Track File Creation

Wrapping is the process of encapsulating the essence, data essence and dynamic metadata files into well-understood temporal units, called Track Files, using a standardized wrapping method such as MXF. Figure 3, below, shows a basic example of this process of wrapping Essence and Metadata into Track Files. This process includes a way to uniquely identify each Track File along with its associated essence and metadata. It also provides a method to identify synchronization locations within each of the Track Files.

Figure 3 - Example Wrapping
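MXF, the wrapping method named above, stores essence and metadata as KLV (Key-Length-Value) triplets per SMPTE 336M; Figure 13 in this specification shows an example of KLV coding. The minimal Python sketch below is illustrative only: the 16-byte key is a placeholder built on the SMPTE UL prefix, not a registered label, and real Track File wrapping involves far more structure.

```python
def ber_length(n: int) -> bytes:
    """Encode a length using BER form, as used by MXF KLV (SMPTE 336M)."""
    if n < 0x80:
        return bytes([n])                      # short form
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body    # long form: 0x80 | byte count

def klv_encode(key: bytes, value: bytes) -> bytes:
    """Wrap a value in a KLV triplet: 16-byte key, BER length, value."""
    assert len(key) == 16, "MXF keys are 16-byte Universal Labels"
    return key + ber_length(len(value)) + value

# Placeholder key for illustration only -- real Track Files use registered SMPTE ULs.
DUMMY_KEY = bytes.fromhex("060E2B34" + "00" * 12)
packet = klv_encode(DUMMY_KEY, b"example essence payload")
print(packet.hex())
```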
Composition

A Composition represents a complete set of files that may be a feature, an episode, a trailer, an advertisement or any other single piece of content. A Composition minimally consists of a Composition PlayList (CPL) and one or more Track Files. Composition PlayLists (CPLs) are XML-structured textual lists that define how elements of an IMF are to be assembled and either transcoded to another format or played out as a presentation.

Figure 4 - Example IMF Composition

Versions

One of the main reasons for having an IMF is the ability to create many versions of a program without duplicating the common essence used for each version. Instead, one Composition PlayList (CPL) is required for each version of the program, and a CPL is much smaller in size than the Track Files themselves. Essence that is specific to the desired version is created and supplied along with a new CPL, which in conjunction with the common essence results in the desired version.

Sequence

Within the Composition PlayList (CPL) one should create Sequences, with the Track Files nested within these Sequences. The CPL is organized in such a way that finding and possibly replacing Sequences becomes more efficient, depending on the number of Track Files. This configuration was selected to give the content creator more choices for the structure of the Composition PlayList. In the IMF, a Sequence represents a conceptual period of time having a specific duration chosen by the content provider. For example, a Sequence may be the same length as a reel, or it may be the running time of a television program between commercials. Again, this length is determined by the content provider to fit both the particular workflow and content type. Once created, Sequences can then be electronically spliced together within the CPL to create a complete presentation. The example below (see Figure 5) shows the hierarchical structure of the CPL. A Sequence shall have one or more Track Files nested within it. The IMF allows cuts to occur only between Track Files, while allowing a minimum Track File duration of a single image frame/field.
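To make the Sequence concept concrete, here is an informal Python sketch (not part of the specification) that models a Composition as an ordered list of Sequences, each containing one or more Track Files with frame-accurate durations; the file names and durations are hypothetical.

```python
from fractions import Fraction

# Illustrative only: a Composition as an ordered list of Sequences, each an
# ordered list of (Track File name, duration in frames). Cuts fall only on
# Track File boundaries, and a Track File may be as short as one frame.
edit_rate = Fraction(24, 1)                              # frames per second
composition = [
    [("reel1_video.mxf", 28_800), ("logo.mxf", 1)],      # Sequence 1 (hypothetical names)
    [("reel2_video.mxf", 30_240)],                       # Sequence 2
]

for i, seq in enumerate(composition, start=1):
    frames = sum(d for _, d in seq)
    print(f"Sequence {i}: {frames} frames = {float(frames / edit_rate):.2f} s")

total = sum(d for seq in composition for _, d in seq)
print(f"Total running time: {float(total / edit_rate):.2f} s")
```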
Figure 5 - Example IMF Hierarchical Structure

Output Profile List

Once versions of a Composition are created, many different distribution formats can be made from a particular version. To assist with the automation of this process, one could use a standard method to define the transcoder or playout device's output. The IMF method of communicating the desired output is called the Output Profile List, or OPL. The OPL is a textual list that contains the specification of the output. It is linked to the CPL by the UUID of the CPL. This allows one to link many OPLs to a single CPL and automate the transcoding of the Composition into multiple distribution formats.

Packages

An Interoperable Master Package (IMP) shall minimally consist of a Composition, an Output Profile List, an Asset Map and a Packing List. An Asset Map is a text document that describes how the files are distributed across the physical media used for transport. A Packing List is a text document that provides a list of all of the files included in that specific IMP. An example of an IMP is shown below, in Figure 6.
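As an informal illustration of the package contents and the OPL-to-CPL linkage described above, the Python sketch below models a minimal IMP and checks that the OPL points at the CPL by UUID. The field names and file names are simplified placeholders; the normative Asset Map, Packing List, CPL and OPL are XML documents defined elsewhere in this specification.

```python
import uuid

def new_id() -> str:
    return f"urn:uuid:{uuid.uuid4()}"

# Hypothetical, simplified model of an IMP's contents -- real Asset Maps,
# Packing Lists, CPLs and OPLs are XML documents defined by this specification.
cpl_id = new_id()
imp = {
    "CompositionPlaylist": {"Id": cpl_id, "TrackFiles": ["video_reel1.mxf", "audio_en_51.mxf"]},
    "OutputProfileList": {"Id": new_id(), "CompositionPlaylistReference": cpl_id},
    "PackingList": ["cpl.xml", "opl.xml", "video_reel1.mxf", "audio_en_51.mxf"],
    "AssetMap": {"volume1": ["cpl.xml", "opl.xml", "video_reel1.mxf", "audio_en_51.mxf"]},
}

# Many OPLs may reference the same CPL to drive different distribution outputs.
assert imp["OutputProfileList"]["CompositionPlaylistReference"] == cpl_id
```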
Figure 6 - Example IMP
2.3 IMF Elements and Processes

The following table provides a list of the identified IMF elements and processes. Some of these elements may be in Extended Level(s); the designation as Basic or Extended will be specified in the sections in which the elements are defined.

Table 1: IMF Elements

1. Image
2. Audio
3. Primary Display Subtitles / Captions
4. Composition PlayList (CPL) Files
5. Output Profile List (OPL)
6. Packing List
7. Asset Map
8. Time Code
9. Secondary Display Captions/Subtitles
10. QC / Picture Reports and Fact Sheets
11. Forensic Marking
12. Pan and Scan (Aspect Ratio Conversion)
13. Time Compression / Expansion
14. Color Transform
15. HI (Hearing Impaired)
16. VI (Visually Impaired)
3 Essence

3.1 Overview

Introduction

Essence files are the core files that contain the image and audio data. One of the goals of the IMF is to allow a variety of these files to be included in the IMF. For example, the IMF will support multiple image resolutions (e.g., 1920x1080 and 1280x720).

Essence System Overview

For the purpose of documenting the specific requirements and specifications for Essence, it is helpful to divide the system into a set of components. The specifications and requirements for each of these components will be described in the following sections:

Image - The image specification and file format
  o Compression Requirements
  o Structural Metadata
Audio - The audio specification and file format
  o Structural Metadata

3.2 Essence Fundamental Requirements

Common Essence File Formats

The Essence file types shall use a common standardized file format for each element (image and audio). The image essence file format shall be a standard-conformant file based on existing ISO or SMPTE standards. The audio track file format shall be based on the Recommendation ITU-R BR (2007) version of Broadcast Wave. Essence file formats shall also use standardized structural metadata to optimize interchange. An MXF-conformant file, based on existing SMPTE standards, shall be used for wrapping each type of element (Image, Audio, Time Code, Dynamic Metadata, etc.).

Frame Rates and Synchronization

The frame rate within any individual IMF essence file must remain constant. Metadata must be carried in the image and audio data file format to indicate the frame rate. Frame Rates and Synchronization are covered in more detail in this section and also in Section 4 (Data Essence), Section 5 (Dynamic Metadata) and Section 7 (The Composition).
3.3 Image Specification

Image Container, Active Image, Pixel Aspect Ratio

The Image Container is defined as the full canvas of the image area, parts of which are not necessarily meant to be seen. The Image Container is defined here as a rectangular array of pixels that contains the maximum possible image area in a given format. In contrast, the Active Image area is a subset of the Image Container and is defined as the area that the content provider considers active picture. The Active Image contains the active image content only, but may contain letterboxing or side mattes if the content provider wants to include them as active image.

The Active Image area metadata tags are designed to be informative. They are not intended to be specifically acted upon by either the CPL or OPL. In addition, downstream transcoding devices shall not use this metadata to crop the image. The Active Image area shall not exceed the size of the Image Container. Because the Active Image area may be smaller than the Image Container, the Active Image area needs to be defined by a horizontal and vertical position within the Image Container. In order to describe the location and size of the Active Image area with the least amount of information, the Active Image area shall be expressed as two x/y coordinate values: Top Left Coordinate and Bottom Right Coordinate. For example, the Active Image of content that is shown in the 2.39:1 aspect ratio in the highest resolution of HD (Active Image width of 1920 pixels) would have an Active Image vertical size of 803 pixels (1920/2.39). Because the vertical size is an odd number, the Active Image cannot be centered within the 1920x1080 Image Container; however, by using coordinates to describe the position, the exact location within the Image Container can be specified, as shown in Figure 7. The Top Left of the Image Container is always at coordinate (0,0).

Table 2: Basic Level Image Containers and Frame Rates and Table 3: Extended Level Image Containers and Frame Rates denote the image containers and frame rates for the Basic and Extended Levels.

Figure 7 - Example of an Image Container vs. 2.39:1 Active Image Area (Image Container 1920x1080; Active Image 1920x803 with Top Left Coordinate (0, 137) and Bottom Right Coordinate (1919, 940))

The IMF Image Container is expressed in the metadata (see section 6.5.3).

Note: The IMF does not specify any hard constraints for horizontal or vertical resolution.
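To make the coordinate arithmetic above concrete, the following illustrative Python sketch (not part of the specification) derives the Active Image height and Bottom Right Coordinate from the Image Container width, the target aspect ratio and a chosen vertical offset, reproducing the Figure 7 values:

```python
def active_image_coords(container_w: int, container_h: int,
                        aspect_ratio: float, top_offset: int):
    """Derive Active Image size and corner coordinates inside an Image Container.

    The Top Left of the Image Container is always (0, 0); the content provider
    chooses where the Active Image sits vertically (top_offset).
    """
    active_h = round(container_w / aspect_ratio)        # e.g. 1920 / 2.39 -> 803
    top_left = (0, top_offset)
    bottom_right = (container_w - 1, top_offset + active_h)
    return active_h, top_left, bottom_right

# Values from the Figure 7 example: 2.39:1 content in a 1920x1080 container,
# with the Active Image placed 137 lines down from the top.
h, tl, br = active_image_coords(1920, 1080, 2.39, 137)
print(h, tl, br)   # 803 (0, 137) (1919, 940)
```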
34 Raster Format (Interlaced or Progressive) and Frame Rate are dependent on each other and to some extent, on the resolution of the image. The IMF must support the same Raster Formats and Frame Rates that are currently used in mastering today including those listed in the following SMPTE specifications: SMPTE 259M SMPTE 296M SMPTE 274M SMPTE (Extended Level) Because the file-based world affords some extra flexibility in regards to the mixing of resolution, frame rate and raster formats, particularly with progressive frame rate material, the IMF shall support the resolutions and frame rates listed in Table 2: Basic Level Image Containers and Frame Rates, and Table 3: Extended Level Image Containers and Frame Rates, in addition to the above SMPTE standards. Table 2: Basic Level Image Containers and Frame Rates System Level 1 (Basic) Image Container support up to: Frame/Field Rates (Hz)* x , 60/1.001, x /1.001, 24.00, 25.00, 30/1.001, x /1.001, 24.00, 25.00, 30/1.001, x , 60/1.001, x , 60/1.001, x , 60/1.001, Color Encoding Color Encoding Ratio Raster Format Stereoscopic Support Y,C B,C R 4:2:2 Interlaced 1 Yes Y,C B,C R 4:2:2 Progressive Yes R,G,B 4:4:4 Progressive No Y,C B,C R 4:2:2 Progressive No Y,C B,C R 4:2:2 Progressive Yes R,G,B 4:4:4 Progressive Yes 1 There may need to be additional metadata defined for Interlaced Raster Formats. Because this is a field rate, additional data elements including field order (bottom or top field first) and possibly repeated fields for content with 3:2 pulldown should be needed to thoroughly describe the Interlaced content. 34
35 Table 3: Extended Level Image Containers and Frame Rates System Level 3 (Extended) Image Container support up to: Frame Rates (Hz)* Color Encoding Color Encoding Ratio Raster Format Stereoscopic Support x , 25.00, 48.00, 50.00, x , 25.00, 48.00, 50.00, x , 48.00, 50.00, x , 48.00, 50.00, x , 25.00, 30.00, 50.00, R,G,B 4:4:4 Progressive Yes X,Y,Z 4:4:4 Progressive Yes R,G,B 4:4:4 Progressive Yes X,Y,Z 4:4:4 Progressive Yes Y,C B,C R 4:2:2 Progressive Yes * Frame/field rate per eye for stereoscopic content. Display may be at a different rate Pixel Aspect Ratio The Pixel Aspect Ratio (PAR) defines the shape of the pixel used by the image in a ratio of width versus height. Square PAR is the same as a ratio of 1:1. Depending on the standard used, the PAR can differ (examples include the non-square PAR for NTSC and PAL). Both the Image Container and the Active Image shall share the same Pixel Aspect Ratio Bit Depth In regards to Bit Depth, Basic Level IMF shall support 8-bit and 10-bit image content and Extended Level IMF shall support 12-bit and 16-bit content Chroma Subsampling For Y C b C r, the Basic Level IMF shall support 4:4:4 (no chroma subsampling) and 4:2:2 chroma subsampling types. Current mastering standards use both of these ratios; therefore, the IMF shall include both of these in the format Color Space, Color Space Gamuts The Basic Level IMF shall include the Color Space and Color Space Gamuts that are currently used in mastering, which include: R G B ITU-R BT.709 Y C b C r ITU-R BT.709 Y C b C r ITU-R BT.601 The Extended Level IMF shall support: XYZ standard CIE 15:2004, Colorimetry Additional Extended Color Gamuts as needed 35
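The container size, bit depth and chroma subsampling choices above determine the raw data footprint of each frame. The following is an illustrative calculation, not taken from the specification, of approximate uncompressed frame sizes for two supported combinations:

```python
# Samples carried per pixel for the chroma subsampling types discussed above.
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0}

def frame_bytes(width: int, height: int, bit_depth: int, subsampling: str) -> float:
    """Approximate uncompressed frame size in bytes (ignores padding/alignment)."""
    samples = width * height * SAMPLES_PER_PIXEL[subsampling]
    return samples * bit_depth / 8

# 1920x1080, 10-bit, 4:2:2 (a Basic Level combination)
print(f"{frame_bytes(1920, 1080, 10, '4:2:2') / 1e6:.2f} MB per frame")
# 1920x1080, 16-bit, 4:4:4 (an Extended Level bit depth)
print(f"{frame_bytes(1920, 1080, 16, '4:4:4') / 1e6:.2f} MB per frame")
```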
3.3.6 Stereoscopic Content

The Basic Level IMF shall support stereoscopic content. The image essence parameters for each eye shall match the parameters for monoscopic image essence in the IMF. For stereoscopic content, the left eye content shall be encapsulated in a single track file, and the right eye content shall be encapsulated in a single and separate track file.

Compression Requirements

The IMF shall support image essence formats in uncompressed, lossy compressed and lossless compressed forms. The compressed image essence shall support all properties of the image essence format (e.g., bit depth, resolution) as described in this document. The following section describes requirements for compressed image essence.

Lossy image compression typically involves a significant reduction in image data rate at the expense of some loss of image quality. If the loss of image quality is so small that the compressed image is perceptually equivalent to the original uncompressed image, then the compression is often described as visually lossless, since there is no visible difference between the compressed and uncompressed image. The classification of a lossy image compression method as visually lossless is dependent on viewing distance, viewing environment, display type, display post-processing settings, the visual acuity and sensitivity of the viewer, and many other factors. For these reasons, it is not required that lossy compression used in the IMF be visually lossless, because the viewing conditions of an IMF are unspecified and will instead be defined by the use of each individual IMF system implementation.

The data rate of lossy image compression is typically described as either constant bitrate or variable bitrate. Constant bitrate image compression assigns the same number of bytes per image for all images in the composition. This results in a fixed data rate but an image quality that is dependent on the complexity of the image; generally, lower complexity images will be represented with higher image quality. Variable bitrate image compression assigns a variable number of bytes to each image; it will typically assign more bytes to images with more complexity and/or detail, while assigning fewer bytes to less complex images.

Lossless image compression is the name given to a class of image compression techniques that only reduce the data rate of the image information and do not reduce the image quality. The lossless class of image compression is also known as mathematically lossless, to distinguish it from visually lossless lossy compression. Lossless image compression typically results in a variable bitrate, as the bitrate is dependent on the image complexity.

Image Compression Requirements

The following section defines the requirements for an Image Compression format and structure. The lossless and lossy compression schemes for Basic Level IMF shall meet the following requirements:

- Shall be an industry standard (e.g., SMPTE, ITU)
- Shall support Intra-Frame compression
- Shall support MXF wrapping
- Shall support Variable and Constant Bit Rates
- Shall support Spatial Resolution Scalability

The compression scheme shall use documented industry standards in order to ensure consistent interoperability between system implementations and to prevent conflicts with intellectual property.
In order to encourage adoption of the IMF by a large number of participants, the compression scheme should either be license-free or available on reasonable and non-discriminatory terms.

Intra-Frame Compression

Intra-Frame compression is an important part of the image requirements because it allows each image frame to be self-contained, thereby allowing edits to occur in one-frame increments
without any required additional decoding. Both variable and constant bit rates should be supported by the compression scheme in order to allow the content owner to balance image quality against overall data size.

Spatial Resolution Scalability

The compression scheme shall allow for different spatial resolution levels within the same frame.

Note: This allows one set of files to contain multiple resolutions of the image at various image quality levels. For example, a compressed HD image with a resolution of 1920x1080 shall allow extraction of lesser resolutions such as 960x540 (half of the original width and height) and 480x270 (one-quarter of the original width and height), without having to decode the full image resolution and then scale the image width and height to the smaller resolutions. This method of extraction allows smaller proxy versions of the files to be used without having to generate a separate file for a system that requires a smaller resolution or image quality level. Smaller proxies are used in many situations, including editing and as reference files for audio conforming and subtitle creation.

Image Compression Codecs

At the time of this writing, a lossy and lossless codec that meets all of the above image and compression requirements is JPEG2000 Part 1; however, the IMF may use any codec that meets the above image and compression requirements. At a minimum, the IMF shall support the JPEG2000 Part 1 (ISO/IEC) codec, including the following profiles:

- JPEG2000 Part 1 Amd 4 - Broadcast Profile, Single-Tile, Level-1 shall be supported for standard definition content.
- JPEG2000 Part 1 Amd 4 - Broadcast Profile, Single-Tile, Level-2 through Level-4 shall be supported for high definition content.
- JPEG2000 Part 1 Amd 1 - 2K Digital Cinema Profile shall be supported for 2K Digital Cinema content. (Extended Level)
- JPEG2000 Part 1 Amd 1 - 4K Digital Cinema Profile shall be supported for 4K Digital Cinema content. (Extended Level)

Uncompressed Requirements

Uncompressed image formats shall be supported in Basic Level IMF, and shall be able to be wrapped into MXF. The IMF shall support the following uncompressed formats:

- Generic Container (SMPTE 384M-2005)
- DPX (SMPTE 268M-2003)

Note: At the time of this writing, the Generic Container and DPX meet the uncompressed image requirements.

Structural Metadata

Image Metadata Required Fields

Specific image metadata shall be required for each image track, including stereoscopic images, that describes the native parameters of the image content, in order to allow for proper interchange between different implementation systems. At the time of this writing, however, the actual metadata fields cannot be defined. Instead, the data elements shown in Table 4 shall be the minimum amount of information supported by the IMF for image. These data elements shall be converted into specific metadata fields once the specification is complete.
38 Table 4: Image Metadata Data Elements Data Element Data Element Definition Examples Stereoscopic Active Image Top Left Start Coordinate (x,y) Active Image Bottom Right End Coordinate (x,y) Code Value Range CODEC Color Channel Bit Depth Color Encoding Color Encoding Ratio Color Primaries Coordinate Origin Frame Rate HANC Start of Active Image within an Image Container area expressed in an (x,y) coordinate value that is placed in relation to the Image Container Top Left Coordinate of (0,0). End of Active Image within Image Container area expressed in an (x,y) coordinate value that is placed in relation to the Image Container Top Left Coordinate of (0,0). Range used to represent zero black and 100% white. Coder Decoder for a digital stream of data. Number of bits used to represent the digital image data. Type of color model used for the image expressed in a set of components. Number of samples used per color space component expressed in a ratio. xy chroma coordinates of the tristimulus values. Coordinates that define the upper left corner of the Active Image within the Image Container. Rate that each image is shown; used in conjunction with Raster Format to express fields/second or frames/second. Horizontal Ancillary Data - Ancillary packets located in the horizontal blanking region. Varies examples include (0,0) and (0,239) Varies examples include (1919,1079) and (1679, 1079) Full-Range (0-1023), Limited- Range (64-940) JPEG 2000 Part 1 8, 10, 12, 16 RGB, YC b C r, XYZ 4:4:4, 4:2:2 Rxy,Gxy,Bxy Top-left, bottomright Frame Rates are specified in the Essence section, Ref: Embedded Audio Image Container Horizontal Pixels Total number of horizontal pixels used for the Image Container. 4096, 2048, 1920, 720, could be many 38
39 Data Element Data Element Definition Examples Stereoscopic Image Container Vertical Pixels Mastering Luminance (Optional) Mono Playback Pixel Aspect Ratio Raster Format Stereoscopic ID Transfer Function VANC Floating Window Total number of vertical pixels used for the Image Container. Optional metadata item that provides the reference luminance value used in the mastering environment. Enables or disables the ability to playback one eye out of a stereoscopic pair - also gives control to the content owner to not allow one eye to be played out. Shape of the pixel expressed in a ratio of width divided by height of the pixel. Note that the PAR is assumed to be the same for both the Image Container and the Active Picture. Interlaced (fields/second) or Progressive (frames/second). Identifies the Track as either a monoscopic or stereoscopic Track and if stereoscopic, identifies which Track is which eye. Relationship of code value to brightness value. Vertical Ancillary Data - Ancillary packets located in the vertical blanking region. Indicates the presence of floating windows for stereoscopic content. 3112, 2160, 1556, 1080, 576, 486, 480, could be many 14fL, 35fL Allowed, Not Allowed 1:1, could be any ratio Interlaced, Progressive Mono, Stereo_Left, Stereo_Right Linear, log, power function Closed Caption data and VPID Present Not Present Stereoscopic Stereoscopic Stereoscopic Mastering Screen Size Indicates the diagonal screen size that the content was mastered on. 42 or 2m Stereoscopic 2D Extraction Method If stereoscopic tracks may be viewed as a single track, this identifies the best eye track (right or left) to use. None if there is no best eye. Prohibited means that 2D extraction is not allowed Right, Left, None, Prohibited Stereoscopic 39
40 3.4 Audio Specification Most of what is detailed in this section shall be included in Basic Level. Areas of Audio that shall be in Extended Level are explicitly stated as such in this section Audio File Format Audio Essence Track Files represent audio within the IMF. An audio essence track file shall be a single, complete audio element, which may be any soundfield configuration supported in IMF (see section ). An audio essence track file would typically be wrapped or interleaved into a single file, and shall be limited to a single audio element (such as a composite mix or dialog track) and a single soundfield configuration (such as 5.1 or mono) per audio track file. Audio essence track files shall be further constrained per sections through Data rate encoded audio shall not be used (e.g., AC3, DTS-MA, Dolby E). Matrix encoded audio (e.g. Lt-Rt, Dolby EX) may be used. Note: Discrete inputs to IMF authoring include (informative): Broadcast Wave, Wave, PCM Sampling Rate Sampling rates shall include k, k, k, and k Frame Rate/Audio Speed Frame rates shall include Native Speed and Pulldown Speed frame rates as follows: Native Speed: 24.00, 25.00, 30.00, and fps. Note, these are equivalent audio speeds, as the image being represented is the same. Pulldown Speed: , and fps. Note, these are equivalent audio speeds, as the image being represented is the same Allowable Samples per Frame Allowable samples per frame shall be based on frame rate, according to the following table. The required sample (word) clocking rate stated in the column header shall cause the audio to sync to the frame speed of the associated image. 40
Table 5: Allowable Samples/Frame for Specified Frame Rates*

Frame Rate kHz 24/ n/a 2000 n/a n/a 1920 n/a / / / n/a 1600 n/a n/a 960 n/a / / / n/a 800 n/a n/a 1000 n/a

Footnotes:
1 Not to be confused with stereoscopic frame rates (see 3.2.2, above).
2 Not to be confused with stereoscopic frame rates (see 3.2.2, above).
3 If the audio sample rate is not an integer multiple of the frame rate, then the number of samples in each frame shall vary such that the correct aggregate number of samples is maintained, per the coding theory delineated in SMPTE 382M, Section 6.2. In practice this is aggregated over a 5-frame period. Therefore, for these cases, the audio editorial granularity is restricted to once every 5 frames. See CPL requirement 7.2.7, Audio Editing Granularity.
4 Not to be confused with stereoscopic frame rates (see 3.2.2, above).

Audio Bit Depth

The audio bit depth shall be 24 bits. 16-bit and 20-bit audio are not allowed as-is; they shall be padded in the least significant bits to create 24-bit audio.

Note: Metadata should be included to reflect the source file bit depth.

Audio Track File Content

Informative Note: IMF differs conceptually from D-Cinema in the way audio is handled. In D-Cinema, one audio track file is playable at a time, and that audio track file can contain multiple audio elements
within its 16 allowable channels. In IMF, multiple audio track files are playable at a time, and each audio track file contains only one audio element with only one soundfield configuration within its 16 allowable channels.

Audio Track File Element Constraint

There shall be one audio element per audio track file. Multiple audio elements shall not be combined in a single audio track file. The following audio elements are typical examples of what may be carried in the IMF:

- Printmaster (reels)
- Composite Mix (full length or parts)
- Music+Effects (one track, two tracks or full multitrack)
- M+E optional material (one track, two tracks or full multitrack)
- Narration
- VI (Visually Impaired)
- HI (Hearing Impaired)
- SAP
- Dialog (one track, two tracks, or full multitrack) - often part of a DME split track
- Music (one track, two tracks, or full multitrack) - often part of a DME split track
- Effects (one track, two tracks, or full multitrack) - often part of a DME split track

Audio Track File Language Constraint

There shall be only one audio language per audio track file. The audio language is named by the primary language spoken in the track.

Track File Loudspeaker Channel Content Constraint

A given loudspeaker channel shall only be represented once per track file (i.e., within a track file containing multiple audio channels); there shall be a one-to-one relationship between loudspeakers and audio channels. There shall not be multiple instances of a loudspeaker channel within a track file.

Channels Per Track File Constraint

An audio track file shall not exceed 16 audio channels.

Simultaneous Multiple Audio Track File Availability and Playout

The CPL in an IMF shall be capable of pointing to multiple audio track files, which are simultaneously available for use by a transcoder or real-time playout device. IMF shall support simultaneous playout of multiple audio track files, as multiple audio elements may be available to play out at once.

Multiple Audio Track File Routing

When accessing multiple track files, the individual channels of each track file shall be combined (like channel to like channel) for output routing purposes on a unity gain basis. For example, the left channel of audio track file A and the left channel of audio track file B are combined at unity gain as the overall left channel content of the IMF. Like channels of multiple track files shall not be combined at other than unity gain. Different loudspeaker channels shall not be combined (e.g., Left and Right are never combined to make a mono output).
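A minimal Python sketch of the like-channel, unity-gain routing rule described above; the channel names and sample values are illustrative, no gain other than unity is applied, and unlike channels are never summed:

```python
def route_like_channels(track_files: list[dict[str, list[float]]]) -> dict[str, list[float]]:
    """Combine the same-named channel of each simultaneously played track file
    at unity gain (simple sample-wise addition, no scaling, no downmixing)."""
    combined: dict[str, list[float]] = {}
    for tf in track_files:
        for channel, samples in tf.items():
            if channel not in combined:
                combined[channel] = list(samples)
            else:
                combined[channel] = [a + b for a, b in zip(combined[channel], samples)]
    return combined

# Track file A (e.g., a dialog element) and track file B (e.g., a music element),
# each carrying Left and Right channels of the same length.
file_a = {"L": [0.10, 0.20], "R": [0.05, 0.00]}
file_b = {"L": [0.30, -0.10], "R": [0.00, 0.25]}
# Output L = A.L + B.L and R = A.R + B.R, each at unity gain; L and R stay separate.
print(route_like_channels([file_a, file_b]))
```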
Mixing of Audio Channels

Individual audio channels within a track file shall not be mixed. Audio channels between multiple track files shall not be mixed, other than per the routing specification stated above. Different channels shall not be combined (downmixed) to make a narrower soundfield configuration.

Soundfield Configuration Constraints

There shall be only one soundfield configuration per track file. Multiple soundfield configurations shall not be combined in a single audio track file. The soundfield configuration shall be described in the metadata that is carried with the audio track file. A registry of IMF soundfield configurations shall be created in conjunction with SMPTE 30MR and shall be used for this purpose. The following soundfield configurations shall be supported in IMF (note: additional soundfield configurations must be registered in order to be supported):

- L/R (Lo-Ro) Standard Stereo
- Lt-Rt
- Mono
- LCR
- LCRS
- 5.1EX
- 6.1 (discrete)
- 6.0 (discrete)
- 7.1 (L, C, R, Lss, Rss, Lrs, Rrs, LFE)

Channel Layout and Mapping

Basic Level: Until such time as the MXF MCA Labeling Standard is ratified, IMF audio track files shall have a defined channel layout for a given soundfield configuration, which is stated in the metadata. For example, 5.1 (6 channel) would always be L, R, C, Sub, Ls, Rs. Note: These layouts should be defined in the SMPTE audio ad hoc group.

Extended Level: When the MXF MCA Labeling Standard is ratified, IMF will need to register ULs for Audio Channel, Soundfield Group and Group of Soundfield Group labels for the IMF application. Once these are registered, IMF shall support these labels. IMF Extended Level compliant devices will be required to read and write these labels. The recognition of Soundfield Configurations and automated routing of audio channels shall be implemented in Extended Level.

Audio Metadata Data Elements

Specific audio metadata shall be required for each audio track that describes the native parameters of the audio content, in order to allow for proper interchange between different implementation systems. At the time of this writing, however, the actual metadata fields cannot be defined. Instead, the data elements shown in Table 6 - Audio Metadata Data Elements shall be the minimum amount of information supported by the IMF for audio. These data elements shall be converted into specific metadata fields once the specification is complete. Audio Metadata elements are given in the following table.
Table 6 - Audio Metadata Data Elements

Essence Audio File Type
    Definition: Audio file type of the essence contained in the track file.
    Examples: 2's Complement PCM

Track Audio Type
    Definition: The type of wrapper that is conveying the essence.
    Examples: MXF

Sample Rate
    Definition: Number of audio samples in a second.
    Examples: 48K, 96K

Frame Rate
    Definition: The audio speed as expressed by the frame rate of the associated picture file.

Samples/Frame
    Definition: The number of audio samples in the duration of one frame of the associated picture file.
    Examples: 2000, 2002, 4000

Source File Bit Depth
    Definition: The number of bits in the audio word of the source file. Note that in the IMF itself this is always 24 bit, but it is important to know if this is the source file bit depth or if it has been padded.
    Examples: 16 bit, 20 bit, 24 bit

Language
    Definition: The primary spoken language of the audio essence.
    Examples: English, Italian

Audio Element Type
    Definition: The type of audio contained in the audio essence.
    Examples: Partial DME split track, printmaster, music and effects

Audio Content
    Definition: An additional modifier to describe the content of the audio contained in the audio essence.
    Examples: Dialog, Visually Impaired, SAP

Associated Audio Track Files Y/N (absence of this value assumes No)
    Definition: Yes indicates that this track file is part of a group of track files that are to be played out simultaneously.
    Examples: Yes or No

Associated Track File Information
    Definition: This information is required only if the Associated Audio Track Files data is set to Y. This indicates which other audio tracks in the IMF are associated with this track file.
    Examples: Music, Effects (e.g. these are other track files that may be associated with a Dialog track file)

Soundfield Configuration
    Definition: This indicates the intended loudspeaker configuration.

Channel Layout (Basic Level only, not required in Extended Level)
    Definition: This is the order of the channel samples and their associated channel labels that are contained in the track file.
    Examples: L, R, C, LFE, Ls, Rs (1)

Native Speed Y/N (required)
    Definition: If NO, look to the speed/processing and pitch correction fields.
    Examples: Yes or No

Speed/Processing (if applicable; required if Native Speed=NO)
    Definition: Indicates if the audio has been processed in order to attain a different speed than its native recorded speed.
    Examples: Varispeed, Time Compression; 25/24 (this is 25 fps, but is sped up from the native speed of 24 fps to 25 fps. The resultant audio is 4.1% faster and higher in pitch than the original captured audio. Metadata should indicate it is 25 fps sped up 4.1% using Varispeed or time compression.)

Pitch Correction Y/N (absence of this value assumes No)
    Definition: Indicates that the pitch of the audio has been corrected to its original pitch after speed processing.
    Examples: Yes or No

Loudness (Number)
    Definition: This is the loudness value, as measured by the Loudness Standard/Method.
    Examples: -25

Loudness Standard/Method
    Definition: The standards document or method used to measure the loudness of the program.
    Examples: ITU 1770, LKFS, Dialog Level

Loudness Range (LRA)
    Definition: Range of loudness level throughout the program measured using an integrated technique.
    Examples: 10 LU, 20 LU

True Peak (Number)
    Definition: The loudness value at the peak point in the program.
    Examples: -6 dBFS

(1) Per work being performed in SMPTE 31FS.
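The Samples/Frame element in Table 6 is fully determined by the Sample Rate and the frame rate of the associated picture. A minimal sketch follows, using exact rational arithmetic so that non-integer frame rates such as 24000/1001 come out correctly; the function name is illustrative.

    from fractions import Fraction

    def samples_per_frame(sample_rate_hz, frame_rate):
        """Number of audio samples in the duration of one picture frame.
        frame_rate is a Fraction, e.g. Fraction(24000, 1001) for 23.976."""
        return Fraction(sample_rate_hz) / frame_rate

    print(samples_per_frame(48000, Fraction(24, 1)))        # 2000
    print(samples_per_frame(48000, Fraction(24000, 1001)))  # 2002
    print(samples_per_frame(96000, Fraction(24, 1)))        # 4000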
47 4 Data Essence 4.1 Overview Introduction This section provides requirements for the subtitle and closed caption data essence. The area pertaining to subtitle requirements provides information about the format of a digital video subtitle track file. The area pertaining to closed captioning provides information about the format of timed text data contained within a digital video file/signal. A subtitle file contains a set of instructions for placing rendered text or graphical overlays at precise locations on distinct image frames. A caption file should provide graphical overlays or provide graphical information to a secondary system for display of text. Outside of providing instructions to a secondary system most other parameters are the same as subtitle data essence. Data Essence information listed in this section shall be included in Basic Level Data Essence System Overview Data Essence within the IMF is intended to enable the repurposing of subtitle and caption data to create various output formats as specified in the Output Profile List. There are a multitude of subtitle and caption formats in use within the motion picture and television industry (many of which are proprietary in nature) which inhibit interoperability between systems. In this regard it is desired to utilize a single common file format to store both subtitle and caption data within the IMF. The Data Essence common file format can be either (i) generated natively, or (ii) derived from existing subtitle/caption source data (e.g. CEA 708 captions, Digital Cinema subtitles). An Output Profile List will then indicate the desired output format to be generated from the common file format stored within the IMF Major Data Essence Concepts and Definitions For the purpose of documenting the requirements and specifications for IMF Data Essence, it is helpful to divide the system into a set of components. The specifications and requirements for each of these components will be described in the following sections: For the purposes of the IMF, data essence can be one of the following: Rendered in a specified font (Timed Text) and overlaid by the system Pre-rendered PNG bitmaps (sub picture) Subtitles Textual representation of the audio track, usually just the dialog and usually in a language other than the audio track dialog, intended for foreign language audience. A subtitle track file contains a set of metadata and a set of subtitle structures that encode the content and temporal/spatial locations of subtitles to be displayed over a primary image. It is understood that this data shall be output as a file and that the data in that file shall remain related to associated video but that the information is not included as part of a video image file Captions Generated data associated with video and intended for optional decode by consumer display devices that are equipped with caption decoders. The resulting text information, generated by the decoding unit, is displayed at specified times during the playback of an image file. Captions are understood to be a textual representation of the audio track, usually including all sounds, and usually in the same language as the audio track dialog, intended for hearing impaired audiences. For the purpose of the IMF we are specifically referencing timed text formats conforming to the CEA-608 (CEA-608-E ANSI 47
48 CEA Line 21 Data Services Specification) and CEA-708 (CEA-708-D DTV Closed Captioning Specification) specifications, which will be identified further below Closed Captions that may be displayed or not displayed depending on user preference. Not all viewers can see the text. The playback device shall be activated in order for the text to be visible. Assumes the viewer cannot hear the program audio and therefore all pertinent audio information is described. Delivered to consumers as part of the video signal and decoded by the display device Open Text is visible to all viewers and cannot be removed or turned off. Text that is burned in to the video or delivered in a way that cannot be deactivated. Delivered as part of the video Data Essence Fundamental Requirements Common File Formats The Essence and Data Essence is required to use a common standardized file format for each element (image, audio, subtitles, etc.). The image essence file format is required to be a SMPTE-conformant file based on existing SMPTE standards such as SMPTE-TT which is described in SMPTE ST :2010. The audio essence file format is required to be based on Broadcast Wave. The Subtitle essence should be based on PNG and XML file formats Frame Rates The image structure is required to support all the frame rates listed in both section 3, which details the Image Essence and section 8, which details the Output Profile List. The frame rate of any individual IMF source master is required to remain constant. Metadata is carried in the image data file format to indicate the frame rate Synchronization Files within the image and/or audio essence are required to carry information to provide for framebased synchronization between each file. Both the timed text and subtitle functions are required to synchronize with the image file at any point. It is required that an IMF device establish correct location and synchronous playback while taking into account frame rate and editing decisions listed in the OPL Sub Picture (Pre-Rendered Open Text and/or Graphics) Description A sub picture data stream is a multiple-image data stream intended for the transport of supplemental visual data to a pre-existing digital image. The data is designed for graphic overlay of the main image and for output to a file format specified by the OPL (see Section 8). It should be designated as open display and/or closed display depending on the output format specified. The sub picture data stream, when employed, shall typically be used for the transport of subtitle data File Format Sub picture data is required to be encoded as a standardized, XML-based document. Such a standard is required to define both timed text and sub picture encoding methods allowing mixed-media rendering. Sub picture frames are required to be encoded as [ISO/IEC 15948:2004] PNG files Composition PNG image files shall have three (3) 8-bit color components (R, G, and B). An 8-bit alpha channel may be present. If an alpha channel is present, the decoder shall use it when creating a composite image. Regardless of the headers present in a PNG image file, the image shall be interpreted as though the 48
49 "srgb" chunk is present, having a "rendering intent" value of 3 (three) ( absolute colorimetric, see [ISO/IEC 15948:2004]). The width and height (in pixels) of a subpicture shall be equal to or less than the width and height, respectively, of the associated main picture. When creating the composite image, the sub-picture shall appear over the main image. A sub-picture's pixels shall be considered equal in size to the those of the main image; each pixel of the sub-picture shall be composed onto exactly one pixel of the main image, i.e., no resizing shall take place during the composite operation. Some legacy formats are encoded as image data and no original text version is available. In addition, they may contain logos or other non-text presentations, which cannot be readily converted into text. This requires some kind of image processing techniques, such as optical character recognition, to recover the text information. This process might not be practical or might be lossy especially if the font information is variable or unknown. While tunneling of the original data is possible as a complete stream, in some instances a hybrid approach is desirable where the image data is used in the TTML presentation and the recovered text is made available computationally but not rendered Text Color Interpretation Text color values encoded in the XML shall be interpreted as srgb values [IEC :1999] Frame Rate and Timing The XML navigation file specifies the temporal resolution of the sub picture file. A Frame count, Time In, Time Out, Fade Up Time and Fade Down Time, which correspond to the image, shall be included. The sub picture frame rate shall be equal to the frame rate of the associated image Essence and shall be modified by the system to match whatever output is specified by the OPL Synchronization The equipment or system that encodes or decodes the sub picture file is required to ensure that temporal transitions within the sub picture file are correctly synchronized with associated output files Timed Text (Presentation of Text in Sync with Audio and Video) Description Timed Text is text information that is displayed at specified times during the playback of an image file File Format For the IMF, Timed Text data would, ideally, be encoded as a standardized, XML-based document that could then be input into an IMF system in an identical manner to Digital Cinema subtitle data. It is understood, however, that previously released content, as well as broadcast content, should have timed text available in a multitude of formats and at lower resolutions. Therefore, in addition to an XML-based document, it is the intent of the IMF to accept timed text data into the system in formats conforming to the CEA-608 and CEA-708 specifications. This would include many of the pre-existing closed caption file formats, the inclusion of caption data in an existing video signal from tape and/or capture of the caption data from an existing video signal. It is understood that this shall limit the available resolution of text ingested in this manner but shall provide flexibility for all downstream resolutions and output formats Default Fonts Font files are required to be used to render Timed Text for subtitle applications. Font files should be used to render Timed Text for subtitle and/or caption applications. When used, font files are required to conform to [ISO/IEC :2009 OpenType]. Timed Text files are required to be accompanied by all font files required for reproduction of the Timed Text. 
The Timed Text file format is required to support a default character set. It is required that there be a default Unicode character set and a default font for that character set. In the event that an external font file is missing or damaged, the subtitle rendering device is required to use a default font supplied by the manufacturer. The default character set is required to be a Unicode ISO Latin-1 character set. The default font is required to conform to [ISO/IEC :2009 OpenType] and support the ISO Latin-1 character set.

Identification
The Timed Text format requires the cardinal language of the text to be identified.

Searchability
A pure text stream is encouraged, to isolate content from rendering markup for searchability.

Multiple Captions
The Timed Text format shall allow the display of multiple captions simultaneously. There shall be a maximum of 3 lines of text allowed for simultaneous display. Note: This allows for spatial representation of captions when two people are talking simultaneously.

Synchronization
The equipment or system that encodes or decodes the Timed Text file is required to ensure that temporal transitions within the data stream are correctly synchronized with associated data streams. The timed text function is required to synchronize with the image at any point. It is required to establish correct location and synchronous playback while taking into account frame rate and editing decisions listed in the OPL.

Stereoscopic Offset
The system shall support a method to provide Stereoscopic offset data for the rendering of stereoscopic subtitles and captions. This method should be clearly understood by all rendering engines supporting IMF content. One possible method of accomplishing this would be the use of Dynamic Metadata to contain the Stereoscopic offset data.

Data Essence Specification
The IMF shall be compliant with the Timed Text Format specified in SMPTE ST :2010 and W3C Timed Text Markup Language (TTML) 1.0, W3C Recommendation 18 November 2010, for its Data Essence.
Reference: W3C Timed Text Markup Language (TTML) 1.0.
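Returning to the Sub Picture composition rules above (three 8-bit color components, an optional 8-bit alpha channel, and a one-to-one mapping of sub-picture pixels onto main-image pixels with no resizing), the compositing step can be sketched as follows. The sketch assumes straight (non-premultiplied) alpha, which this draft does not state explicitly; the placement offset is taken from the accompanying XML, and all names are illustrative.

    def composite_pixel(sub_rgb, alpha, main_rgb):
        """Blend one sub-picture pixel over one main-image pixel.
        sub_rgb and main_rgb are (r, g, b) 8-bit tuples; alpha is 0..255,
        with 255 fully opaque. Straight alpha is assumed."""
        a = alpha / 255.0
        return tuple(round(a * s + (1.0 - a) * m) for s, m in zip(sub_rgb, main_rgb))

    def composite(sub, alpha, main, x0, y0):
        """Compose a sub-picture onto the main image at offset (x0, y0).
        Each sub-picture pixel lands on exactly one main-image pixel,
        i.e. no resizing takes place during the composite operation."""
        for y, row in enumerate(sub):
            for x, px in enumerate(row):
                main[y0 + y][x0 + x] = composite_pixel(px, alpha[y][x], main[y0 + y][x0 + x])
        return main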
51 5 Dynamic Metadata 5.1 Overview Introduction Dynamic Metadata is metadata that changes on a periodic basis. In most of the cases described below, that are applicable to the IMF, this usually occurs on a frame basis. That is to say that there is the potential for metadata to change every frame. There are two envisioned uses, at this time, for Dynamic Metadata. One is Time Code and the other is Pan and Scan information. These are discussed in further detail below. Other types of Dynamic Metadata that can be conceived include Color Correction information and 3D-related information but will not be discussed here. Dynamic Metadata shall be included in Extended Level IMF and should not be included in Basic Level IMF Dynamic Metadata System Overview For the purpose of documenting the specific requirements and specifications for Dynamic Metadata, it is helpful to divide the system into a set of components. The specifications and requirements for each of these components will be described in the following sections: Stored Time Code The specification and file format for time code dynamic metadata, which is a separate Track File and outside of the Essence header metadata. Pan and Scan The specification and file format for Image pan and scan dynamic metadata Major Dynamic Metadata Concepts Dynamic Metadata shall be used to provide a frame accurate reference to specific metadata. This metadata shall then be used to identify a frame or location and/or provide data to a device that would manipulate the essence. Dynamic Metadata shall be designed in such a way that it should be wrapped into a Dynamic Metadata Track File Dynamic Metadata Fundamental Requirements Common File Formats The dynamic metadata shall use a common standardized file format for each element. See also Common Essence File Formats. 5.2 Stored Time Code Requirements In addition to the synthetic timecode signal that can be generated from the Composition Playlist timeline and metadata (see [CPL timecode sections]), the IMF shall support the carriage of timecode information as follows: In all cases, As header metadata in essence (audio, image, etc.) Track Files - this method need not preserve users bits, binary groups or binary group flags. In separate Dynamic Metadata Timecode Track Files - this method shall preserve binary groups or binary group flags. This method shall also support timecode frame rates that are different from the frame rate of associated image essence. It shall also support multiple time code tracks, e.g. Keycode, Audio Time Code, Foot and Frames, Time of Day code, etc. Stored Timecode shall allow continuous time code across a complete Composition. Stored Timecode shall support part for part time code. (Ex. Hour 1 Part 1, Hour 2 Part 2) The purpose of Stored Time Code in the IMF is manifold: 51
52 It may be used to carry historic information from the source It allows synchronization and identification of essence frames with downstream processes and devices that are not aware of the Composition Playlist timeline. 5.3 Pan and Scan Specification The pan and scan metadata track shall contain the information to allow for a pan and scan version of the image. Instead of storing multiple versions of the image to accommodate different aspect ratios of the feature (for example, 2.39:1, 1.78:1 and a 1.33:1), this feature allows for the IMF to contain one version of image and along with pan and scan metadata, create multiple aspect ratio versions of the image track Pan and Scan Requirements The pan and scan metadata track shall contain information derived from pan and scan composition equipment/software in a standardized format. There are two specific types of pan and scan information that will be discussed here: Pixel-for-Pixel Pan and Scan Metadata Spatial Pan and Scan Metadata Pixel-for-Pixel Pan and Scan Metadata includes coordinate information for changing the composition of an image but does not include any resizing (zooming in or out) of the image. Instead, the total image area has a certain raster and the pan and scan information moves around this raster, leaving the size of each pixel the same or pixel-for-pixel. This data includes basic coordinate information that allows for simple movements in the picture without the need to factor in any filters for zooming or re-scaling of the image. Spatial Pan and Scan Metadata includes both coordinate information and scaling information for changing the composition of an image. With spatial changes, the area depicted by the pan and scan information does not have to stay pixel-for-pixel to the original total image area. Instead, the pan and scan area could be smaller or larger than the final output size, causing a re-scaling of the image in the pan and scan area. Due to the re-scaling, filters need to be factored in, in order to properly create the re-size without causing too many image artifacts such as aliasing or ringing. The format shall include the basic, common capabilities of pan and scan in an image for Pixel-for-Pixel pan and scan including: Displaying only certain areas of an image Tilt (up and down movement) Pan (left and right movement) In addition to the above, the format should include basic Spatial pan and scan information including: Zoom in Zoom out/scale Horizontal squeeze/stretch Vertical squeeze/stretch SMPTE ST defines how to handle all of the pan and scan types listed above. Rotate and Flip/Flop (horizontally/vertically) have been intentionally excluded from the dynamic pan and scan metadata. Letterboxing, side-matting and windowboxing may be covered by SMPTE ST using AFD codes instead of Pan and Scan metadata and it should be discussed during the standardization process whether these types of image conversions should be handled via pan and scan metadata or AFD codes Time Code and Movement Identification The pan and scan metadata track shall be used in conjunction with either the time code metadata track or synchronized in the CPL in order to create the proper movements from one area of the image to another area of the image at specific time codes. Any movement from one area to another area of the image shall be depicted as Pan Scan Events that occur at specific frame counts. For simple cut 52
53 changes, the pan-scan size and location shall be described at specific frame counts numbers. The movement could be either a constant, linear movement or a dynamically-changing, non-linear movement; however, movements shall not be defined only the resulting changes in image size and location per frame count Source Image, Output Image and Viewport In order to output the pan and scan image information properly, the source image area and the final output image area shall be defined. The source image shall hold the entire canvas that is to be panned and scanned over and the Pixel Aspect Ratio of the source image is taken into consideration for the proper pan and scan image output. Each set of pan and scan metadata shall contain a specific output image that defines the final destination image area and size for the pan and scan output. In addition to the source image and the final output image, a specific Pan/Scan Image Area needs to be defined. This area, also known as the viewport, specifies the location of the actual pan and scan image information in regards to the source image. The viewport can move around, but the final output image area shall determine the final destination size and pixel aspect ratio that will be output. Other parameters in the pan and scan metadata could cause the viewport to be altered such as changing the overall aspect ratio or the size of the image to be displayed, but the output image shall always determine the final output size. The definitions of output image and viewport are similar to those found in SMPTE The definition of source image is slightly different because for IMF, the source image may be the full Image Container (not only the Output Image area). If pan and scan metadata exists, the output image area for the pan and scan metadata shall supersede the global Output Image area set for the IMF for the particular output of the pan and scan version. This allows an IMF to contain more than one set of pan and scan metadata. Other types of metadata should be included such as fields that would define certain types of scaling/re-size filters and dithering that should be used during image re-sizing. At the time of this writing, more investigation is needed to determine if universal scaling and dithering filters should be included in the pan and scan metadata. The output image settings and/or Output Profile List (OPL) shall determine the overall scaling of the images during the output stage of the IMF. The output image area created by the pan and scan metadata shall be scaled to fit the output resolution depending on the OPL parameters. For example, a 1.78 pan and scan could be output as a 4x3 letterboxed 1.78 if the OPL specified the settings to create the 4x3 letterbox from the 1.78 pan and scan metadata. If no OPL exists, then the output of the Pan and Scan Metadata Track shall be the output image size Pan and Scan Metadata Required Fields In order to define the areas of the image that should be shown and to create any pan and scan movements, specific metadata is required to accurately identify the pan and scan information. At the time of this writing, however, the actual metadata fields cannot be defined. Instead, the data elements shown Table 7 - Pan and Scan Metadata Data Elements shall be the minimum amount of information supported by the IMF for pan and scan metadata. These data elements shall be converted into specific metadata fields once the specification is complete. 
Some of these data elements have sub-element fields to help define the overall data element. Note that SMPTE defines some of these fields in the Data Set Words, so redundant fields should be minimized during the standardization process where possible. Output Image Width and Height are described in this document (instead of using only an Output Image Aspect Ratio as used in SMPTE ) in order to allow for flexibility across various aspect ratios; however, Width and Height could be folded into the Aspect Ratio if they are determined not to be needed.
Table 7 - Pan and Scan Metadata Data Elements

Output Image Aspect Ratio
    Definition: The fixed aspect ratio for the final output.
    Examples: 2.40, 2.39, 2.35, 2.20, 1.78, 1.33

Fill Color (Fill Image)
    Definition: In cases where the Output Image is smaller than the output size, this field determines the color that should be used to fill.
    Examples: Format could be R-G-B values dependent on bit depth of IMF

Output Image Width
    Definition: Total number of horizontal pixels used for the final Output Image area.
    Examples: 1920

Output Image Height
    Definition: Total number of vertical pixels used for the final Output Image area.
    Examples: 576, 486, 480

Output Image Pixel Aspect Ratio
    Definition: Shape of the pixel expressed as a ratio of width divided by height of the pixel, specifically for the final Output Image area.
    Examples: 1:1, could be any ratio

Pan/Scan Event ID
    Definition: Identifies the event within the list of pan and scan events.
    Examples: Numeric, starting with 0000

Frame count
    Definition: Sub-element of Pan/Scan Event ID, frame count of where the event is located.

Horizontal Viewport Offset
    Definition: Sub-element of Pan/Scan Event ID, pan value that specifies the location of the center of the viewport with respect to the center of the source image, in source pixels.
    Examples: 0 = no pan. SMPTE defines a bounded range in 1/16 units; positive values are to the right, negative values are to the left; IMF may need more values.

Vertical Viewport Offset
    Definition: Sub-element of Pan/Scan Event ID, tilt value that specifies the location of the center of the viewport with respect to the center of the source image, in source pixels.
    Examples: 0 = no tilt. SMPTE defines a bounded range in 1/16 units; positive values are down, negative values are up; IMF may need more values.

Zoom Horizontal Coefficient
    Definition: Sub-element of Pan/Scan Event ID, specifies the width of the viewport, in pixels of the source image.
    Examples: 1920

Zoom Vertical Coefficient
    Definition: Sub-element of Pan/Scan Event ID, specifies the height of the viewport, in pixels of the source image.
    Examples: 576

Examples of how the metadata fields work together to create the pan and scan image areas and movements are shown below.

Figure 8 - Example 1: Simple cuts from one area to another

In the above example, two people are having a conversation. The Source Image/full aperture image is 4096x3112 (with a square PAR for simplicity), and the red boxes show the Output Image area that should be displayed to create a 1.33 version of the image. The image in Shot #1 would be displayed starting at a specific time code, while one person is talking, and then cut to the image in Shot #2 when the other person starts talking. This is an example of pixel-for-pixel pan and scan metadata. The two new viewport centers are in reference to the source image center. A possible scenario of the pan and scan metadata is shown below:

Table 8 - Metadata Fields common to Event 0000 and 0001
    Output Image AR: 1.33
    Fill Color: (assumes 10-bit RGB content)
    Output Image H: 1080
    Output Image W: 1440 (final output of PS meant for 1440x1080)
    Output Image PAR: 1:1 (square pixel)

Table 9 - PS Event ID: 0000, Pan/Scan Event for Shot #1
    Frame Count:
    H Viewport: (left of source center)
    V Viewport: -389 (tilt up from source center)
    H Zoom: n/a (do not use for this case)
    V Zoom: n/a (do not use for this case)

Table 10 - PS Event ID: 0001, Pan/Scan Event for Shot #2
    Frame Count:
    H Viewport: 1328 (right of source center)
    V Viewport: -389 (tilt up from source center)
    H Zoom: n/a (do not use for this case)
    V Zoom: n/a (do not use for this case)

Figure 9 - Example 2: Pan from one area to another

In the above example, two people are having a conversation. The Source Image/full aperture image is 4096x3112 (with a square PAR), and the red boxes show the Output Image area that should be displayed to create a 1.33 version of the image. The image in Shot #1 would be displayed starting at a specific time code, while one person is talking, and then pan and tilt over to the image in Shot #2 when the other person starts talking. The pan could be either linear or non-linear in speed. Example 2 is similar to Example 1, but instead of only having two Pan/Scan Events, there would be a separate Pan/Scan Event for each subsequent frame as the red box area changes along the pan. During each time code frame, the viewport center of the red box would change according to where the image should be, based on whether the pan is a constant, linear speed or a dynamic, non-linear pan. This example is an example of pixel-for-pixel pan and scan metadata.
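A per-frame event list for a pan like Example 2 can be generated mechanically once the start and end framings are known. A minimal sketch follows; the field names loosely follow Table 7, the frame numbers and the starting offset are illustrative assumptions, and a non-linear pan would simply replace the linear interpolation.

    def linear_pan_events(start_frame, end_frame, start_center, end_center):
        """One pixel-for-pixel Pan/Scan Event per frame, moving the viewport
        center linearly between two (horizontal, vertical) offsets expressed
        in source pixels relative to the source image center."""
        n = end_frame - start_frame
        events = []
        for i in range(n + 1):
            t = i / n
            events.append({
                "FrameCount": start_frame + i,
                "HorizontalViewportOffset": round(start_center[0] + t * (end_center[0] - start_center[0])),
                "VerticalViewportOffset": round(start_center[1] + t * (end_center[1] - start_center[1])),
            })
        return events

    # Pan from an assumed Shot #1 framing on the left to the Shot #2 framing
    # of Table 10 over 48 frames (both the duration and the start offset are illustrative).
    events = linear_pan_events(0, 48, (-1328, -389), (1328, -389))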
Figure 10 - Example 3: Change in Output Image size

In this example, Shot #1 starts out with a smaller Output Image area that slowly zooms out to show the entire area in Shot #2. The zoom out could be linear or non-linear in speed. Similar to Example 2, there would be a separate Pan/Scan Event for each subsequent frame as the red box area changes along the zoom out. During each time code frame, the viewport center of the red box would change according to where the image should be, based on whether the zoom out is a constant, linear speed or a dynamic, non-linear zoom out. Because the image sizes are different for each shot's Active Area, during the output of the IMP, the images would be scaled to the final output resolution. This could result in Shot #1 being scaled up to the final resolution or Shot #2 being scaled down to the final resolution. An OPL could be created to determine the parameters of how this re-scaling could be done. This example is an example of Spatial pan and scan metadata.

Figure 11 - Example 4: Squeezing or Scaling Shots

In this example, the original image is 1920x1080 and the shot contains a 1.78 main title that needs to be modified in order to display in a 1.33 version. However, after the main titles and credits, the rest of the feature will be full-frame. This example shows two ways to make widescreen credits fit into a 1.33 aspect ratio:
Squeeze the credit horizontally to fit into the 4x3 area
Letterbox the credit and keep the same aspect ratio
58 In either case, feature content could either cut back to a full-screen 4x3 image. In the letterbox case, the letterboxing could slowly scroll outwards to reveal a full-screen 4x3 image by zooming in on the credit. All of these examples are examples of Spatial pan and scan metadata and shall be accommodated in the IMF standard. One possible solution for the Squeeze example would be to use pan and scan metadata with the Horizontal and Vertical Zoom Coefficients. For the Letterbox example, AFD might be a possible way to implement the top and bottom mattes along with a Fill Color of (assuming the content is 10-bit RGB). 58
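The arithmetic behind the two options in Example 4 is straightforward. The sketch below computes either the letterbox mattes or the anamorphic squeeze factors needed to place a 1920x1080 (1.78) shot into a 1.33 output raster; the 1440x1080 output size echoes the Example 1 tables and is otherwise illustrative.

    def letterbox_mattes(src_w, src_h, out_w, out_h):
        """Fit the full source width into the output width while preserving the
        source aspect ratio; return (scaled_height, top_matte, bottom_matte)."""
        scale = out_w / src_w
        scaled_h = round(src_h * scale)
        matte = out_h - scaled_h
        return scaled_h, matte // 2, matte - matte // 2

    def horizontal_squeeze(src_w, src_h, out_w, out_h):
        """Scale factors for a squeeze that fills the whole output raster."""
        return out_w / src_w, out_h / src_h

    print(letterbox_mattes(1920, 1080, 1440, 1080))    # (810, 135, 135)
    print(horizontal_squeeze(1920, 1080, 1440, 1080))  # (0.75, 1.0)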
59 6 Wrapping 6.1 Overview Introduction The IMF, as stated in the System Overview, is a collection of files, such as picture essence files and audio track files. These files, as they stand by themselves, do not represent a complete presentation. Synchronization tools, asset management tools, metadata, content protection and other information are required for a complete presentation to be understood and played back as it was intended. This is especially important when the files become compressed and/or encrypted and are no longer recognizable as image essence or audio essence in this state. See Frame Rates and Synchronization. Wrapping is a way to organize and collect this material in such a way as to make it suitable for storage and movement to its destination. In seeking a common interchange standard for digital video between facilities and equipment, it is understood that there should be multiple sources of content. This shall require special consideration to achieve IMF interchange. Thus, an interchange wrapping structure is needed that operates across several domains. This section identifies specific AS-DCP specifications for wrapping IMF data. At the time of this writing, the AS-DCP wrapping documents comprise the only complete set of essence wrappings of which the editors are aware that conform to the goals of IMF. It is intended however that IMF will use instead the emerging AS-02 wrapping. Because that wrapping is not yet published, the editors have elected to leave the AS-DCP references in place because they continue to represent the model envisioned for IMF. It is the intention of the editors that the AS-DCP standards will be replaced in IMF by AS-02 as soon as those documents are available. Most of what is detailed in this section shall be included in Basic Level. Areas of Wrapping that shall be in Extended Level are explicitly stated as such in this section. 6.2 Wrapping Requirements Introduction The Audio, Image and Text Track Files are the lowest-level components in the IMF system. The Material exchange Format (MXF, SMPTE ST 377:2009) defines the common structure of the various types of files used in IMF to wrap audio, image and timed-text essence. MXF defines a variety of abstract essence container types for storing Essence, Data Essence or Dynamic Metadata information. An MXF file consists of three logical parts: the File Header, the File Body and the File Footer as shown in Figure 12, below. Figure 12 - Example Track File Structure The file structure is further broken down into logical data items as defined in [SMPTE 336M Data Encoding Protocol using Key-Length-Value]. The KLV Coding Protocol is composed of a Universal Label (UL) identification Key (UL Key), followed by a numeric Length (Value Length), and followed by the data Value as shown in Figure 13, below. One or more of these data items are combined to form the logical parts shown above. 59
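The KLV structure just described can be read with very little code. A minimal sketch, assuming the input is a byte string of correctly formed, unencrypted KLV packets; it is illustrative only and is not a full MXF parser.

    def read_ber_length(buf, pos):
        """Decode a BER length field starting at buf[pos]; return (length, next_pos)."""
        first = buf[pos]
        if first < 0x80:                    # short form: the byte is the length
            return first, pos + 1
        n = first & 0x7F                    # long form: the next n bytes hold the length
        return int.from_bytes(buf[pos + 1:pos + 1 + n], "big"), pos + 1 + n

    def iter_klv(buf):
        """Yield (ul_key, value) pairs from concatenated KLV packets."""
        pos = 0
        while pos < len(buf):
            key = buf[pos:pos + 16]         # 16-byte Universal Label key
            length, pos = read_ber_length(buf, pos + 16)
            yield key, buf[pos:pos + length]
            pos += length                   # an unrecognized key can simply be skipped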
Figure 13 - Example of KLV Coding (UL Key: 16 bytes; Length: BER variable length; Value: octets of format information)

Each Track File shall be a self-contained element, such that its Essence and Metadata can be understood and presented as it was packaged by a compliant decoder (i.e., no information outside the file is needed to fully decode the file). The information shall be located in the file as specified by the wrapping standard for the respective essence type.

Metadata
In addition to the metadata items defined by the MXF file specifications referenced below, it shall be necessary to develop descriptors (MXF Data Sets) that allow version and title information to be embedded in MXF files.

Synchronization
Each Track File shall contain Edit Rate and Index metadata. Synchronization of two or more track files is defined by the Composition Playlist (CPL). See Frame Rates and Synchronization.

Splicing
Track Files of the same Essence type are required to allow for seamless splicing at frame boundaries to create a continuous data stream for a presentation. The playback system shall be able to perform sample-accurate splicing at frame boundaries of audio track files.

Security
IMF Track File formats shall support encryption and integrity checking.

Extensibility
MXF inherently supports future extensions by allowing the decoder to ignore unrecognized KLV packets.

Simple Essence
A track file shall contain essence of a single Essence type (e.g., audio, image, subtitles). While a Track File should, for instance, contain all audio channels for a given soundfield, additional Soundfields are required to be stored in separate track files. The Composition Playlist shall select the correct Track Files to play a requested version of the composition.

6.3 MXF Track File Encryption (Extended Level)

Introduction
Track File encryption provides additional security for files transported or stored in a non-secure environment. A symmetric-key cipher is employed for processing efficiency. A key management scheme is required to provide a way to send symmetric keys securely to an intended recipient. Key management is not addressed in this document. Track file encryption shall be supported in Extended Level.
61 6.3.2 Standards SMPTE ST 429-6:2006 defines a symmetric key encryption wrapper for KLV packets in an MXF file. 6.4 Image Track File Introduction An Image Track File contains the image Essence data and its associated Metadata. Each Image Track File may be compressed and/or encrypted. The following are requirements for an Image Track File Frame Boundaries The Image Track File shall contain all identifiers and parameters required for the decoder to recover the images from the file Compression The Track File shall support Constant Bit Rate (CBR) compression and Variable Bit Rate (VBR) compression, within the constraints of the specified code stream for the reference decoder Standards SMPTE specifications ST 422:2006 and ST 429-4:2006 define an MXF wrapping for JPEG 2000 images. The MXF wrapper is compatible with Track File encryption defined by SMPTE ST 429-6:2006. Additional standards shall be developed to provide for using the color spaces required by IMF. 6.5 Audio Track File Introduction An Audio Track File contains the audio Essence data and its associated Metadata. Audio Essence is stored in Audio Track Files as sample-interleaved 24-bit PCM. Each audio track file shall contain a single complete soundfield (i.e., all channels in the file shall be heard simultaneously by the listener) Standards SMPTE specification ST 382:2007 defines an MXF wrapping for sample-interleaved PCM audio data. The MXF wrapper is compatible with Track file encryption defined by SMPTE ST 429-6:2006. Note: No standard exists at this time to address the channel labeling requirements of IMF. A multichannel labeling framework is maturing in SMPTE TC 31FS, but additional work will be required to create the labeling structure required for IMF Metadata In addition to the Metadata defined by the ST 382:2007 wrapping, the following Metadata items shall be supported by the Audio Track File: Unique ID of corresponding plaintext track encrypted Channel Mapping Labels 6.6 Timed Text Track File Introduction A Timed Text Track File contains text Essence data, such as subtitles or captions, for display onscreen (over the main image) or potentially by an auxiliary display. Text essence is encoded as Unicode character strings with associated font resources, to be rendered by the decoder, or as PNG 61
62 images for direct display. Each Timed Text Track File should contain any combination of text, font references, and image references Standards SMPTE specifications ST 428-7:2007 and ST :2008 define XML file formats for timed text with fonts and PNG files. SMPTE ST 429-5:2009 defines an MXF wrapper for XML-based timed text and associated ancillary resources. The MXF wrapper is compatible with Track file encryption defined by SMPTE ST 429-6:2006. Note: Support for stereoscopic positioning is being developed by SMPTE TC 21DC. That work is expected to result in revisions to ST some time in Time Code Track Files (Extended Level) Introduction A Time Code Track File contains a complete SMPTE 12M timecode stream. The following are requirements for a Time Code Track File. Time Code Track Files shall be supported in Extended Level Frame Rate The Time Code track file shall support frame accurate timecode for frame rates greater than 30 frames per second, e.g. stereoscopic source material Faithful Capture The Time Code track file shall record as accurately as possible the source timecode signal, including starting timecode time addresses and frame-accurate representation of timecode discontinuities frame-accurate representation of user-bits and binary group flags of SMPTE 12M multiple timecode streams, e.g. KeyCode streams Flexible Output The Time Code track file shall allow output to LTC, VITC and ATC streams Frame Boundaries The Time Code Track File is encouraged to begin and end with complete frames that are associated with its Image Track File to allow for a clean transition between reels Metadata The following Metadata shall be furnished with the Time Code Track Files: Unique Identification Track Type (i.e., auxiliary) Frame Count Number Text Format (If Applicable) Cue Names (If Applicable) 62
63 7 The Composition 7.1 Introduction The Composition represents a complete self-contained digital media program. This Composition should be a feature, episode, trailer, advertisement or any other single piece of content. A Composition usually consists of at a minimum a Composition Playlist (CPL) and one or more Track Files, which contain the actual program essence. The Composition and CPL information in this section shall be included in Basic Level IMF. 7.2 Overall Requirements Synchronization The Composition Playlist format shall provide support for synchronization of the Essence and Metadata elements. See Frame Rates and Synchronization Human Readable Metadata Human readable metadata shall be in English (default) but can be provided in other languages as well Stereoscopic Content The Composition Playlist shall support mixed monoscopic and stereoscopic essence within a single Composition File Format The Composition shall be defined and represented using the Extensible Markup Language (XML) [XML 1.0], and specified using XML Schema [XML Schema Part 1: Structures] and [XML Schema Part 2: Datatypes]. This specification shall be associated with a unique XML namespace name [Namespaces in XML]. This namespace name conveys both structural and semantic version information, and serves the purpose of a traditional version number field. Table 11: XML Namespaces, below, lists the XML namespace names used in this specification. Namespace names are represented as Uniform Resource Identifier (URI) values [RFC 2396]. These values shall be considered as simple strings, and applications should not attempt to resolve them as URLs. Table 11: XML Namespaces Qualifier URI cpl xs ds 63
The namespace prefixes used in this document (cpl, xs, ds) are not normative values. Implementations shall perform correctly with any XML-compliant namespace prefix value that is associated with a URI from Table 11: XML Namespaces, above. Datatypes from other schemas that are used in this document will be prefixed with the appropriate namespace qualifier (e.g. xs:dateTime). See [XML Schema Part 2: Datatypes] and [XML-Signature Syntax and Processing] for further information about these types.

A Composition Playlist document shall consist of an XML document with a single CompositionPlaylist element of type CompositionPlaylistType at its root. It shall be encoded using the UTF-8 character encoding, and its MIME type [IETF RFC 2046] shall be text/xml.

Audio Synchronization
The Composition Playlist shall:
accommodate audio elements that do not have the same duration as the associated image element;
accommodate audio elements whose start and end points do not coincide with those of the associated image elements;
allow a sequence of multiple audio elements to be synchronized with a sequence of one or more image elements.
This is to support the following audio partitions: continuous (full length), reels and parts. This also allows audio inserts of different length than the corresponding image insert, to facilitate proper audio flow across edits.

Audio Editing Granularity
The audio editing granularity shall have frame accuracy. An IMF authoring tool shall have granularity to the sample, except as stated in Fractional Sample Editorial Granularity.

Crossfades and Transitions
There shall be no automatic or automated audio crossfades in the IMF CPL at playout or transcoding. Rendered audio shall be used to perform difficult crosses between content. Note: These would be created in the audio editing process and rendered in the authoring process. Audio fade-ups or fade-outs shall not be supported. Only hard cuts ("butt splices") shall be supported.
7.3 Terminology

The following terms are used to describe the features of this specification. These terms and definitions are also in Table 22: Glossary of Terms, located in the Annex.

Table 12: Terms and Definitions

Composition: A complete artistic or informational motion picture work, such as a feature, episode, trailer, or an advertisement, etc.

Edit Rate: A number of Editable Units to be reproduced during a temporal interval having a duration of exactly one (1.0) second. Because Edit Rate values are not always integer values and sometimes require many digits of precision, Edit Rate values are expressed as a rational number (the ratio of two integers).

Editable Unit: The smallest temporal increment of access to Essence, e.g. a frame or a sample.

Essence: The audio, image and data resources that ultimately are intended for a viewing and/or listening experience.

Frame Rate: The number of frames per second. Frame Rate values are expressed as a rational number.

Intrinsic Duration: The total number of Editable Units in a Track File.

Native End Point: The last Editable Unit of a Track File.

Native Start Point: The first Editable Unit of a Track File. All Track Files are viewed by a Composition Playlist as a sequence of Editable Units numbered from 0 (zero). Consequently, the Editable Unit number of the Native Start Point of a Track File shall always be 0 (zero).

Playable Region: The set of Editable Units within a Track File that is intended to be reproduced. A Track File may contain Editable Units before and/or after the Playable Region.

Sample Rate: The number of essence samples per second. Sample Rate values are expressed as a rational number (the ratio of two integers).

Sequence: A collection of Segments intended to be reproduced in parallel.

Track File: A file containing a single Essence, such as audio, image or subtitle essence.

Segment: An ordered collection of Resources to be reproduced sequentially.

Resource: Associates metadata or essence with a playable portion of the Composition Playlist timeline. A Resource may reference a Playable Region within a Track File.

7.4 Synchronization

The Composition Playlist defines an idealized playback timeline for the complete Composition. As illustrated in Figure 14 (Composition Playlist Timing), it consists of a list of Sequences that are reproduced sequentially and without gaps.

Figure 14. Composition Playlist Timing.

Each playable instant of the Composition is associated with a unique offset expressed in Edit Units, starting from 0 and extending to a value equal to the sum of the durations of all Sequences.

Figure 15. Sequence Timing.

As illustrated in Figure 16 (Segment Timeline), each Sequence consists of a list of Segments that are played in parallel and have the same duration. In other words, all Segments within a Sequence start and end simultaneously.
67 Figure 16. Segment Timeline. As illustrated in Figure 17. Resource Timeline, each Segment consists of a list of Resources of the kind of essence that are played sequentially. For instance, an ImageSegment may contain MonoResources that are meant to be ultimately rendered to monoscopic image (video) essence. Resources may reference external Track Files, which contain image, audio, data, etc., or may store all their data within the Composition Playlist structure, e.g. Marker Segments. The duration of a Segment is the sum of the duration of its Resources. This does not mean that all underlying Track Files within a Sequence must have the same duration since Resources may reference only a portion of the underlying Track File and Segments may contain multiple Resources. To enable a continuous playback of essence across Sequences, each Segment contains a Track ID that links Segments across Sequence boundaries. The sequential playback of all Segments with the same Track ID creates a virtual track across the entire Composition. A Track ID may be used only once in each Sequence and, if present in one Sequence, it must be present in all Sequences. Figure 17. Resource Timeline The Playable Region may be reproduced multiple times using the RepeatCount element. As illustrated in Figure 17, a Resource identifies the portion of the underlying Track File to be reproduced. The Resource EntryPoint and SourceDuration parameters define the number of Edit Units within the Track File that is to be reproduced (the Playable Region). 67
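The EntryPoint, SourceDuration and RepeatCount parameters are enough to map an offset on a virtual track back to an edit unit in the underlying Track File. A minimal sketch, using plain dictionaries as stand-ins for Resources; the element names follow the text above, but the data layout and file names are illustrative.

    def resolve_offset(resources, track_offset):
        """resources: ordered list of dicts with 'file', 'entry_point',
        'source_duration' and optional 'repeat_count' (all in Edit Units).
        Returns (file, edit_unit) for a given offset on the virtual track."""
        pos = 0
        for r in resources:
            total = r["source_duration"] * r.get("repeat_count", 1)
            if track_offset < pos + total:
                local = (track_offset - pos) % r["source_duration"]  # wrap on each repeat
                return r["file"], r["entry_point"] + local
            pos += total
        raise ValueError("offset lies beyond the end of the virtual track")

    # A short silence Track File repeated to fill a gap, then program audio.
    track = [
        {"file": "silence.mxf", "entry_point": 0, "source_duration": 24, "repeat_count": 10},
        {"file": "dialog_reel1.mxf", "entry_point": 12, "source_duration": 480},
    ]
    print(resolve_offset(track, 250))   # ('dialog_reel1.mxf', 22)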
Figure 18. Sample Monoscopic Composition Playlist (Image Track and two Audio Track timelines, TrackID=AF..7A and TrackID=AF..5F). Only the first and last bytes of UUIDs are represented.

Figure 18 depicts a sample Composition Playlist composed of 2 Sequences, each containing one monoscopic Image Segment and two Audio Segments. It illustrates the following features of the Composition Playlist:
A Resource can reference a portion of the underlying Track File, i.e. the Track File can have handles.
The same Track File can be referenced by multiple Resources, within and outside a particular Track. Audio Track File ID= is referenced by Image Segments ID=35..4C and ID=AF..5F. Image Track File ID=AB..72 is referenced twice within the first Image Segment.
A Track File can be repeated multiple times within a single Resource. This allows, for instance, gaps in Audio Tracks to be filled by repeating a Track File containing a few samples of silence.
Figure 19. Sample Stereoscopic Composition Playlist (Image Track and Audio Track TrackID=AF..5F timelines). Only the first and last bytes of UUIDs are represented.

The Composition Playlist illustrated in Figure 19 contains stereoscopic image essence. A single StereoResource contains two references, one to an Image Track File containing left eye information and the other to an Image Track File containing right eye information.
70 7.5 Composition Playlist Structure The Composition Playlist shall include the following elements unless stated as optional in which case they should be included. An example of a Composition Playlist is provided in Annex C. As noted, some of the elements of the Composition Playlist replicate information present in the underlying resources to allow quick access to the information. This information is provided for information purposes only, therefore the metadata in the underlying Resource shall always take precedence in case of ambiguity. Figure 20. Composition Playlist Element Structure. 70
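Before the individual elements are described, it may help to see the overall shape of a document. The sketch below builds a skeletal Composition Playlist with Python's xml.etree; the element names follow this section, but the namespace URI, the rational encoding of EditRate and all values are placeholders, since this draft does not fix them here.

    import uuid
    import xml.etree.ElementTree as ET

    NS = "http://example.com/imf/cpl"          # placeholder namespace, see Table 11
    ET.register_namespace("cpl", NS)

    def q(name):                               # qualify a name with the CPL namespace
        return "{%s}%s" % (NS, name)

    cpl = ET.Element(q("CompositionPlaylist"))
    ET.SubElement(cpl, q("Id")).text = "urn:uuid:" + str(uuid.uuid4())
    ET.SubElement(cpl, q("IssueDate")).text = "2011-02-19T00:00:00Z"
    ET.SubElement(cpl, q("Creator")).text = "Example authoring tool / facility / operator"
    ET.SubElement(cpl, q("ContentTitleText")).text = "The Jazz Singer"
    ET.SubElement(cpl, q("EditRate")).text = "24 1"   # rational value, illustrative encoding
    ET.SubElement(cpl, q("SequenceList"))             # Sequence elements would be added here

    print(ET.tostring(cpl, encoding="unicode"))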
71 7.5.1 Id The Id element uniquely identifies the Composition Playlist for asset management purposes. It shall not uniquely identify the content represented by the composition. It shall be encoded as a urn:uuid per [RFC 4122] Annotation [optional] The AnnotationText element shall be a free-form, human-readable annotation describing the composition. It is meant strictly as a display hint to the user. The optional language attribute is an xs:language language code and indicates the language used for the text. If the language attribute is not present, the default value en shall be used IssueDate The IssueDate element shall be used to define the time and date at which the Composition Playlist was issued. It should be displayed to the user. It shall be encoded as an xs:datetime Issuer [optional] The Issuer element shall be a free-form, human-readable annotation that shall identify the entity that created the Composition Playlist. It is meant strictly for display to the user. The Signer element defined in Section shall be used to identify the entity that digitally signed the Composition Playlist. The optional language attribute is an xs:language language code and indicates the text language of the content of the element. If the language attribute is not present, the default value en shall be used Creator [optional] The Creator element shall be a free-form, human-readable annotation that shall identify; the application used to create the Composition Playlist, the Facility that created the CPL and the operator that created the CPL. It is meant strictly for display to the user. The optional language attribute is an xs:language language code and indicates the text language of the content of the element. If the language attribute is not present, the default value en shall be used ContentTitleText The ContentTitleText element shall contain a human-readable title for the composition, e.g. The Jazz Singer. It is strictly meant as a display hint to the user. The optional language attribute is an xs:language language code and indicates the language of the content of the element. If the language attribute is not present, the default value en shall be used ContentKind [optional] The ContentKind element defines the kind of material referred to by the Composition Playlist. It is meant to be both human and machine-readable. Table 13: Examples of Content Kind, below, shows examples of Content Kind. Table 13: Examples of Content Kind Kind advertisement feature Description Content promoting a product or service other than an upcoming feature. A theatrical feature. 71
72 psa rating short teaser test trailer transitional Public service announcement. Slate/still image indicating the recommended age group permitted to view the content to follow. This rating is generally unique per country. Non advertising/promotional content (3 to 15 minutes) typically before a theatrical feature. Very short (typically less than 1 minute) content promoting an upcoming theatrical feature. Content used to test, calibrate or setup equipment. Short (2 to 3 minutes) content promoting an upcoming theatrical feature. Extremely short content (1 to 15 seconds) separating unrelated compositions ContentVersionList [optional] The ContentVersionList element contains a list of ContentVersion elements. The ContentVersion elements are synonyms and uniquely identify the version of the content referred to by the composition, as opposed to the Id element, which uniquely identifies an instance of the Composition Playlist. In other words, two distinct compositions that refer to the same content shall have distinct CompositionPlaylist Id values but should have the same ContentVersion elements. This should occur, for example, if a composition is distributed to supersede a previous version. Similarly, while two compositions may share the same content title, they should refer to two different versions, such as French (dubbed) and French (original), and therefore have distinct ContentVersion elements. The ContentVersionList element is meant to assist both users and software in scheduling and tracking content. Each ContentVersion element shall contain the following elements Id The Id element shall identify the content contained in the Composition Playlist. It shall be a valid URN, per [RFC 2141] LabelText The LabelText element shall be a human readable label, e.g. French (1.85 picture, 16.1 sound, dubbed), describing the content. The optional language attribute is an xs:language language code and indicates the text language of the content of the element. If the language attribute is not present, the default value en shall be used EssenceDescriptorList [optional] The EssenceDescriptorList element shall contain a list of EssenceDescriptor elements. Collectively these EssenceDescriptor elements describe the essence referred to by the composition. Specifically, when present, each EssenceDescriptor element shall match the descriptive information already present in one or more Resources, e.g. Track Files. This information is provided here in human readable form for informational purposes only. Machines should not rely on this information for transcoding or playout and in all cases the Track File metadata takes priority over 72
information provided here. The EssenceDescriptor elements are meant to assist both users and software in scheduling and tracking content. Each EssenceDescriptor element is identified by a unique UUID value that shall be referenced by one or more Resources. For example, if a composition contains both stereo and 5.1 audio, there would be at least two audio EssenceDescriptor elements, and each of the stereo and 5.1 Resources would contain a single UUID (see SourceEncoding element below) that references the respective descriptor. Each kind of Resource referenced by the Composition Playlist shall define the content of its corresponding EssenceDescriptor element and the process by which it is generated. For example, the EssenceDescriptor element for an MXF Image Track File may simply consist of an XML representation of its embedded MXF Essence Descriptor structures. Sample EssenceDescriptor elements are provided as illustration in Section .

CompositionTimecode [optional]
As described in Section 7.4, the Composition Playlist structure defines a unique timeline for the Composition and associates a unique offset expressed in Edit Units with every playable instant. While this method allows precise synchronization of essence within the Composition Playlist, it is not always appropriate for the synchronization of downstream devices and processes. The Composition Playlist therefore provides the information necessary to generate a unique timecode output [SMPTE ST 0012] from its timeline. The starting timecode output value shall be equal to TimecodeStartAddress and shall correspond to time offset 0 within the Composition timeline. The timecode output shall increment by one frame for every Edit Unit played.

TimecodeDropFrame
The TimecodeDropFrame element shall determine whether the synthetic timecode output is Drop Frame or Non-Drop Frame.

TimecodeInterlace
The TimecodeInterlace element shall determine whether the synthetic timecode output signals an interlace or progressive image output.

TimecodeRate
The TimecodeRate element shall specify the nearest-integer frames-per-second rate of the timecode output, e.g. 24, 30.

TimecodeStartAddress
The TimecodeStartAddress shall specify the value of the timecode output at the beginning of the Composition.

EditRate
The EditRate element defines the Edit Rate of the Composition. It shall be in units of 1/seconds and represented as a Rational Number.

TotalRunningTime [optional]
The TotalRunningTime element shall be used to define the complete running time of the Composition at the time the Composition Playlist was issued. It should be displayed to the user. It shall be encoded as hours:minutes:seconds.
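The synthetic timecode described under CompositionTimecode can be derived directly from the Composition timeline. A minimal sketch for the non-drop-frame case; drop-frame counting (TimecodeDropFrame) is deliberately omitted, and the start address and rate are illustrative.

    def synthetic_timecode(edit_units, start_address="00:59:50:00", rate=24):
        """Timecode reached after playing `edit_units` Edit Units, starting at
        `start_address` (HH:MM:SS:FF) with a TimecodeRate of `rate` fps.
        Non-drop-frame only."""
        hh, mm, ss, ff = (int(x) for x in start_address.split(":"))
        total = ((hh * 60 + mm) * 60 + ss) * rate + ff + edit_units
        ff = total % rate
        total //= rate
        ss = total % 60
        total //= 60
        mm = total % 60
        hh = (total // 60) % 24
        return "%02d:%02d:%02d:%02d" % (hh, mm, ss, ff)

    print(synthetic_timecode(0))     # 00:59:50:00
    print(synthetic_timecode(240))   # 01:00:00:00 after 240 Edit Units at 24 fps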
LocaleList [optional]
The LocaleList element contains a list of one or more Locale elements. A Locale is a grouping of region or country code, rating, language and annotation elements that identifies the scope of the intended audience in terms of location, language and censorship authority. Each Locale element shall contain either a CountryList or RegionList element, but not both.

Figure 21. Locale Element Structure.

Annotation [optional]
The AnnotationText element shall be a free-form, human-readable annotation describing the Locale. It is meant strictly as a display hint to the user. The optional language attribute is an xs:language language code and indicates the language used for the text. If the language attribute is not present, the default value en shall be used.

LanguageList [optional]
The LanguageList element shall contain a list of one or more Language elements and shall reflect the spoken or textual languages of the Locale. Each Language element shall be an RFC 5646 language code.

CountryList [optional]
The CountryList element shall reflect the countries making up the Locale. It shall contain an ordered list of one or more Country elements. Each Country element shall use the ISO standard three-letter acronym for its designation.

RegionList [optional]
The RegionList element shall reflect the Regions making up the Locale. It shall contain an ordered list of one or more Region elements. Each Region element shall be a human-readable string.

RatingList [optional]
The RatingList element shall contain an ordered list of zero or more Rating elements containing ratings associated with the Locale. Each Rating element, shown in Table 14: Example Ratings (Informative), contains an Agency and a Label element. Each element is meant to be both human and machine-readable. There shall be only one Rating element per given Agency.
75 Agency Label The Agency element shall contain a URI [RFC 2396] that uniquely identifies the agency issuing the rating. The Label element shall contain a textual representation of the rating, which should be displayed to the user. For each issuing agency, and hence unique URI, there are a number of permissible Label values. The specification of this mapping is beyond the scope of this document. Table 14: Example Ratings (Informative) Agency Labels R, PG, PG-13, G, NC-17 G, 13+, 16+, SequenceList The SequenceList element shall contain an ordered list of Sequence elements that shall be reproduced in sequence, without gaps. The structure of the Sequence element shall be as defined in Section Signer [optional] The Signer element uniquely identifies the entity, and hence the public-private key pair, that digitally signed the Composition Playlist. It shall be an instance of the KeyInfoType type defined in [XML- Signature Syntax and Processing]. If the Signer element is present, then the Signature element shall also be present. If X.509 certificates are used per [XML-Signature Syntax and Processing], then the Signer element shall contain one X509Data element containing one X509IssuerSerial element, which uniquely identifies the certificate used to sign the Composition Playlist Signature [optional] The Signature element shall contain a digital signature authenticating the Composition Playlist. If the Signature element is present, then the Signer element ( above) shall also be present. The Signature element shall be an instance of the ds:signature element defined in [XML-Signature Syntax and Processing]. The digital signature shall be enveloped and apply to the entire Composition Playlist. An enveloped signature is one that is attached to the document being signed. The signature is generated by the signer, as identified by the Signer element, using the signer s private key. 7.6 Sequence Structure A Sequence specifies the playback in parallel of multiple Segments, each associated with a particular aspect of the presentation. While this specification defines a number of Segments, additional Segments may be added in the future. The structure is shown below in Figure
76 7.6.1 Id Figure 22 - Sequence Structure The Id element uniquely identifies the Sequence for asset management purposes. It shall be encoded as a urn:uuid per [RFC 4122] Annotation [optional] The AnnotationText element shall be a free-form, human-readable, text annotation associated with the Sequence. It is meant strictly as a display hint to the user. The optional language attribute is an xs:language language code and indicates the text language of the content of the element. If the language attribute is not present, the default value en shall be used SequenceIssueDate [optional] The SeqIssueDate element shall be used to define the time and date at which the Sequence was issued. It should be displayed to the user. It shall be encoded as an xs:datetime SegmentList The SegmentList element shall contain an ordered list of Segment elements derived from the datatype BaseSegmentType. All Segments shall be reproduced in parallel. Segments shall contain a TrackID element. Segments with equal TrackID belong to the same Track and shall be played sequentially and without gaps across Sequence boundaries. A TrackID shall be used only once in each Sequence and, if present in one Sequence, it shall be present in all Sequences. Applications may use TrackID, for instance, to select a particular audio track for output. Each Segment should be associated with a single aspect of the presentation and therefore a single kind of essence. This specification defines a number of Segments. As additional Segments are defined, the SegmentList element shall be extended according to Section SegmentType Id The Id element contains a UUID per [RFC 4122] that uniquely identifies the Segment. 76
7.6.7 TrackId [optional]
The TrackId contains a UUID that uniquely identifies the Track to which the Segment belongs.

ResourceList
The ResourceList element contains an ordered list of Resources to be reproduced sequentially.

ImageSegment [optional]
The ImageSegment element defines image essence to be reproduced. The actual image essence is contained in an external Track File.
A given ImageSegment shall contain Resources of type StereoImageTrackFileResourceType, in the case of stereoscopic images, or Resources of type TrackFileResourceType, in the case of monoscopic images. A given ImageSegment shall not, however, contain a combination of both.
There shall be zero or one ImageSegment element per Sequence.

AudioSegment [optional]
The AudioSegment element defines audio essence to be reproduced. The actual audio essence is contained in an external Track File.
The AudioSegment element shall contain Resources of type TrackFileResourceType. There shall be zero or more AudioSegment elements present.

SubtitleSegment [optional]
The SubtitleSegment element defines Subtitle essence to be reproduced in conjunction with the essence referenced by the ImageSegment. The actual Subtitle essence is contained in an external Track File.
The SubtitleSegment element shall contain Resources of type TrackFileResourceType. There shall be zero or more SubtitleSegment elements present.

TimecodeSegment [optional]
The TimecodeSegment element defines timecode essence associated with the Sequence. The actual timecode essence is contained in an external Track File.
The TimecodeSegment element shall contain Resources of type TrackFileResourceType. There shall be zero or more TimecodeSegment elements present.

MarkerSegment [optional]
The MarkerSegment element defines markers, e.g. FFOC, LFOC. Markers shall be referenced from the start of the Sequence they are associated with.
The MarkerSegment element shall contain Resources of type MarkerResourceType. There shall be zero or one MarkerSegment element per Sequence. The IntrinsicEditRate of all Resources within a MarkerSegment shall be equal to the EditRate of the Composition Playlist.

CaptionSegment [optional]
The CaptionSegment element defines caption essence associated with the Sequence. The actual caption essence is contained in an external Track File.
The CaptionSegment element shall contain Resources of type TrackFileResourceType. There shall be zero or more CaptionSegment elements present.

Extension (defining new kinds of Segments)
Extension elements shall be used to represent Segment types not defined in this document. Zero or more extension elements may be present in the SegmentList. When present, extension elements shall be located after any elements defined by this document. When present, extension elements shall have names that belong to a namespace different from the namespace declared by this document. Implementations may ignore extension elements belonging to an unknown namespace. Extension elements shall directly or indirectly extend BaseSegmentType.
Informative note: Extension elements should have unique, descriptive names and should appear only once in a given Sequence. Extension specifications that allow multiple instances of an element in a Sequence should provide both a means of differentiating instances within a Sequence and a means of linking related instances in separate Sequences.

7.7 Resource Structure
All Resource elements share common attributes and are therefore specified as a set of types derived from a common structure, namely the BaseResourceType structure.

BaseResourceType
The BaseResourceType describes a generic resource intended to be reproduced as part of a Segment. Child elements are defined in the following subsections.

Id
The Id element uniquely identifies this particular Resource instance. It shall be encoded as a urn:uuid per [RFC 4122]. It does not identify the underlying essence, data or Track File.

AnnotationText [optional]
The AnnotationText element shall be a free-form, human-readable text annotation associated with the Resource. It is meant strictly as a display hint to the user. The optional language attribute is an xs:language language code and indicates the text language of the content of the element. If the language attribute is not present, the default value en shall be used.

IntrinsicDuration
The IntrinsicDuration element shall be the Native Duration of the Asset. It shall not take into account the values of the EntryPoint and SourceDuration elements. Unless the optional EntryPoint and SourceDuration parameters are specified, playback of the asset shall start at the Native Start Point and terminate at the Native End Point of the Track File. IntrinsicDuration shall be expressed in units of 1/IntrinsicEditRate, i.e. as a count of Editable Units.

IntrinsicEditRate [optional]
The IntrinsicEditRate element shall be the Edit Rate of the referenced resource. It provides the time base for the IntrinsicDuration, EntryPoint and SourceDuration of the Resource. If absent, it shall be assumed to be equal to the Edit Rate of the Composition Playlist.

EntryPoint [optional]
The EntryPoint element identifies the Edit Unit where playback shall start (the first editable unit of the playable region). It shall be encoded as an integer number and shall be expressed in units of
1/IntrinsicEditRate, i.e. as a count of Editable Units. This element shall be required if the desired Entry Point is greater than 0 (zero). If this element is not present, a value of 0 shall be assumed and Asset playback shall start at the Native Start Point of the resource.

RepeatCount [optional]
The RepeatCount element defines the number of times to repeat the section of edit units defined by EntryPoint and SourceDuration. It shall be equal to one if absent.

SourceDuration [optional]
The SourceDuration element defines the duration of the Playable Region of the resource. It shall be encoded as an integer number and shall be expressed in units of 1/IntrinsicEditRate, i.e. as a count of Editable Units. If present, this value shall be an integer between 0 (zero) and IntrinsicDuration - EntryPoint (the number of edit units between the EntryPoint and the Native End Point of the Track File). If this element is not present, Resource playback shall stop after (IntrinsicDuration - EntryPoint)/IntrinsicEditRate seconds, i.e. at the Native End Point of the Asset.

TrackFileResourceType
The TrackFileResourceType shall be derived from BaseResourceType. It describes an asset based on an external Track File. The defining specification for each Track File shall specify the location of the identifying UUID.

AssetId
The AssetId shall be encoded as a urn:uuid per [RFC 4122]. It uniquely identifies the underlying Track File.
Note: Mapping of UUID values to actual Track File locations is beyond the scope of this document.

SourceEncoding
The SourceEncoding element shall be a UUID that shall reference an EssenceDescriptor element of the EssenceDescriptorList element defined above.

KeyId [optional]
The KeyId element uniquely identifies the cryptographic key used to encrypt the underlying track file. This element shall contain a key identifier encoded as a urn:uuid value. KeyId shall be present if any portion of the underlying track file is encrypted.
Note: The mapping of key identifiers to actual key values is beyond the scope of this document.

Hash [optional]
The Hash element shall contain the hash (message digest) of the underlying track file computed using the SHA-1 message digest algorithm [RFC 3174]. When authenticated by the digital signature in the Composition Playlist (see the Signature element above), it should be used to verify the integrity and authenticity of the underlying track file. The resulting 160-bit integer shall be encoded using Base64 representation [RFC 2045].

StereoImageTrackFileResourceType
The StereoImageTrackFileResourceType shall be derived from the BaseResourceType. The StereoImageTrackFileResourceType shall contain two elements, LeftEye and RightEye, each derived from TrackFileResourceType and each referencing a single Track File containing monoscopic essence.
The IntrinsicEditRate and SourceDuration of an instance of StereoImageTrackFileResourceType shall be equal to the corresponding parameters contained in its LeftEye and RightEye elements.

MarkerResourceType
The MarkerResourceType shall be derived from the BaseResourceType. It contains content markers, e.g. FFOC. Just as for image and audio assets, the marker asset has a timeline. The Offset of each Marker is the position from the start of the timeline, and the IntrinsicDuration of the timeline shall correspond to the Offset of the last Marker.

MarkerList
The MarkerList element shall contain a list of Marker elements. Marker Labels should be repeated as there will likely be multiple instances of several types of content segments throughout a given program (e.g. production logos, commercial blacks, etc.). The members of the Marker element are defined in the following subsections.

Label
The Label element shall contain a textual representation of the marker. An optional scope attribute with default URI value of determines the permissible values of the element.

Table 15: Examples of Marker Labels
Marker  Description
FFBT    First Frame of Bars and Tone
FFCB    First Frame of Commercial Blacks
FFCL    First Frame of Company/Production Logo
FFDL    First Frame of Distribution Logo
FFEC    First Frame of End Credits. First displayable frame of content that contains any intensity of the End Credits (a non-zero alpha value), which appear at the end of a feature.
FFHS    First Frame of Head Slate
FFMC    First displayable frame of content that contains any intensity of moving, rolling or scrolling credits (a non-zero alpha value), which appear at the end of the feature.
FFOB    First Frame of Ratings Band. First displayable frame of content of the Rating Band, which is usually a slate at the beginning of a feature.
FFOC    First Frame of Composition. The first frame of a composition that is intended for display.
FFOI    First Frame of Intermission.
FFSP    First Frame of Digital Sync Pop
FFTC    First Frame of Title Credits. First displayable frame of content that contains any intensity of the Title Credits (a non-zero alpha value), which appear at the beginning of a feature.
FFTS    First Frame of Tail Slate
FTXC    First Frame of Textless Title Credits
FTXE    First Frame of Textless End Credits
FTXM    First Frame of Textless Material Segment
LFBT    Last Frame of Bars and Tone
LFCB    Last Frame of Commercial Blacks
LFCL    Last Frame of Company/Production Logo
LFDL    Last Frame of Distribution Logo
LFEC    Last Frame of End Credits. Last displayable frame of content that contains any intensity of the End Credits (a non-zero alpha value), which appear at the end of a feature.
LFHS    Last Frame of Head Slate
LFMC    Last displayable frame of content that contains any intensity of moving, rolling or scrolling credits (a non-zero alpha value), which appear at the end of the feature.
LFOB    Last Frame of Ratings Band. Last displayable frame of content of the Rating Band, which is usually a slate at the beginning of a feature.
LFOC    Last Frame of Composition. The last frame of a composition that is intended for display.
LFOI    Last Frame of Intermission.
LFSP    Last Frame of Digital Sync Pop
LFTC    Last Frame of Title Credits. Last displayable frame of content that contains any intensity of the Title Credits (a non-zero alpha value), which appear at the beginning of a feature.
LFTS    Last Frame of Tail Slate
LTXC    Last Frame of Textless Title Credits
LTXE    Last Frame of Textless End Credits
LTXM    Last Frame of Textless Material Segment

AnnotationText [optional]
The AnnotationText element shall be a free-form, human-readable annotation associated with the marker. It is meant strictly as a display hint to the user. The optional language attribute is a standard XML language code and indicates the text language of the content of the element. If the text language
attribute is not present, the default value en shall be used. The field does not require a size limit, but for practical purposes, the field shall allow at least 2048 characters.

Offset
The Offset element defines the position of the marker from the start of the marker asset. It shall be represented as an integer number of 1/IntrinsicEditRate units.

7.8 Constraints
The following is a list of items that are intended to be constrained within the Composition and therefore reflected in the CPL.

Constant Frame and Sample Rate
The Composition shall only consist of a single, common edit rate, and of the same sample rate content in all of the audio track files; all essence referenced by the Composition shall have the same edit rate and sample rate.

Minimum Track File Duration
The duration of any asset contained within a Sequence, as indicated by the SourceDuration and IntrinsicDuration elements, shall be no less than one frame. In the case of the fractional samples per frame that exist with certain frame rates (e.g., 29.97 fps with 48 kHz or 96 kHz audio), the minimum audio track file duration shall be 5 frames, beginning and ending with an integer sample number. For example, at 29.97 (30000/1001) frames per second with 48 kHz audio there are 1601.6 samples per frame, but 5 frames contain exactly 8008 samples. Refer to Table 5: Allowable Samples/Frame for Specified Frame Rates.

Minimum Sequence Duration
The duration of any Sequence shall be the same as the minimum track file duration.

Fractional Sample Editorial Granularity
Fractional samples per frame shall only allow audio editorial granularity of once every 5 frames.

Stereoscopic Content
Stereoscopic content shall be stored in the Composition as separate track files for each eye in a Dual Track format. Each Track shall conform to the same parameters as a monoscopic Track.

7.9 EssenceDescriptor Information
The essence descriptors presented below are used to carry the encoding parameters of the MXF track files in the CPL. They are exposed in a CPL using the EssenceDescriptorList element (defined above). The values presented represent the sum of all requests for metadata agreed to by the IMF participants. Because they are largely duplicative of the metadata in the MXF files, it is hoped that instead of developing this parallel set of essence descriptors, the IMF project will instead develop additional metadata items as needed for MXF, and then provide a means for carrying the actual MXF metadata structure (transformed into XML) in the CPL's EssenceDescriptorList element. This information is provided here in human-readable form for informational purposes only. Machines should not rely on this information for transcoding or playout, and in all cases the Track File metadata takes priority over information provided here.
83 7.9.1 Generic SourceMediaDescription The SourceMediaDescription element shall be human readable text, e.g. HDCamSR (1.85 picture, 16.1 sound, dubbed), describing the content source. The optional language attribute is an xs:language language code and indicates the text language of the content of the element. If the language attribute is not present, the default value en shall be used StandardBadge The StandardBadge element shall contain information of the encoding standard used. This is informative information to allow for humans and machine to read StandardsBody The StandardsBody element shall contain a URI [RFC 2396] that uniquely identifies the Standards Body issuing the Standard Label The Label element shall contain a textual representation of the Standards Body, which should be displayed to the user. For each issuing Standards Body, and hence unique URI, there are a number of permissible Label values. The specification of this mapping is beyond the scope of this document TimecodeType The TimecodeType element shall contain information on the time code format used in the underlying asset Image FrameRate The FrameRate element shall contain the number of frames per second of the intended playback rate of the underlying essence (e.g., 24000/1001). It shall be encoded as a rational number of frames per second ContainerHeight The ContainerHeight element shall contain the vertical pixel count of the underlying essence. It shall be encoded as an integer number of pixels ContainerWidth The ContainerWidth element shall contain the horizontal pixel count of the underlying essence. It shall be encoded as an integer number of pixels ColorSampling The ColorSampling element shall contain the identification of the color sampling method of the underlying essence. It shall be encoded as a string that matches one of the following: 4:4:4, 4:4:4, 4:2:2. 83
84 ColorEncoding The ColorEncoding element shall contain the identification of the color encoding method of the underlying image track file. It shall be encoded as a string that matches one of the following: RGB, XYZ, YCbCr SampleType The SampleType element shall contain the identification of the pixel encoding method of the underlying essence. It shall be encoded as a string that matches one of the following: uint, half, float ImageBitDepth The ImageBitDepth element shall contain the number of bits assigned to each image component of the underlying essence. It shall be encoded as an integer number in the set {10, 12, 16} ImageAspectRatio The ImageAspectRatio element shall define the aspect ratio of the image information contained in the underlying image track file. It is represented as a rational number, e.g. 16/9, and applications should convert the ratio to a decimal number to match current practice ActiveCoordinates The ActiveCoordinates element shall define the active image rectangle of the underlying image essence. It is represented by the coordinates to the top left and lower right vertex of the rectangle CodeValueBlack The CodeValueBlack element shall define the pixel value of a black pixel in the underlying image essence CodeValueWhite The CodeValueWhite element shall define the pixel value of a white pixel in the underlying image essence Audio SampleRate The SampleRate element shall contain the sampling rate of the underlying audio essence (e.g., 48,000). It shall be encoded as a rational number of Hz AudioSamplesPerFrame The AudioSamplesPerFrame element shall contain the integer of the samples per frame of the underlying audio essence (e.g., 2002) SoundfieldConfig The SoundfieldConfig element shall contain the Soundfield configuration of the underlying audio essence (e.g., LtRt, 5.1 etc.) SampleBitDepth The SampleBitDepth element shall contain the integer number of bits per sample word, which shall be 24 bits. 84
85 ChannelCount The ChannelCount element shall contain the number of audio channels of the underlying audio track file Language [optional] The Language element shall reflect the primary spoken language of the sound material of the underlying sound essence. The element value is encoded as an xs:language language code and indicates the spoken language of the content. The absence of the element shall indicate that no spoken language is associated with the asset Subtitle and Caption Essence Format [optional] The Format element defines the format of the underlying subtitle or caption essence Language [optional] The Language element shall reflect the primary text language used by the subtitle or caption essence. The absence of the element shall indicate that no primary text language is associated with the asset. It is represented as an xs:language value. 85
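Informative example: taken together, the descriptor fields defined in this section might be carried in an EssenceDescriptorList as sketched below, with one image descriptor and one audio descriptor. The exact nesting and the rational encodings are assumptions made for illustration only; the UUIDs and values are placeholders, and the Track File metadata remains authoritative.

<EssenceDescriptorList>
  <EssenceDescriptor>
    # Image descriptor; its Id is referenced by the SourceEncoding element of the image Resources
    <Id>urn:uuid:00000000-0000-0000-0000-0000000000a1</Id>
    <FrameRate>24000/1001</FrameRate>
    <ContainerWidth>1920</ContainerWidth>
    <ContainerHeight>1080</ContainerHeight>
    <ColorSampling>4:2:2</ColorSampling>
    <ColorEncoding>YCbCr</ColorEncoding>
    <ImageBitDepth>10</ImageBitDepth>
    <ImageAspectRatio>16/9</ImageAspectRatio>
  </EssenceDescriptor>
  <EssenceDescriptor>
    # Audio descriptor; 2002 samples per frame corresponds to 48 kHz at 24000/1001 fps
    <Id>urn:uuid:00000000-0000-0000-0000-0000000000a2</Id>
    <SampleRate>48000/1</SampleRate>
    <AudioSamplesPerFrame>2002</AudioSamplesPerFrame>
    <SoundfieldConfig>5.1</SoundfieldConfig>
    <SampleBitDepth>24</SampleBitDepth>
    <ChannelCount>6</ChannelCount>
    <Language>en</Language>
  </EssenceDescriptor>
</EssenceDescriptorList>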
8 Output Profile List

8.1 Introduction
Output Profile List Definition
The Output Profile List (hereafter called the OPL) is an optional set of information that may be used in conjunction with a CPL to specify particular content provider output preferences. In a typical workflow, it would be specifically designed to read and point to a particular CPL within a particular IMF. It would then be able to facilitate the passage of the composition's content to a downstream device. In doing so, it would also pass the content provider's preferences (if included) through to a downstream device in order to facilitate the execution of these preferences in the downstream device (i.e. automation).
An OPL is not required for a CPL to function within an IMF; however, one is required for an Interoperable Master Package (IMP). In the absence of an OPL, a CPL is used as the default input to the playout engine and the IMF output would be an exact reflection of the content within that composition with no processing applied. (Note that the simple unwrapping and decoding of the IMF track files that occurs in the downstream IMF device is not considered processing.)
An OPL shall be included in an IMP as a way to deliver these preferences along with the other components contained within the IMP. At a minimum, this should be a Simple OPL (see Section 8.6.1). It should also be delivered separately from an IMP via another means, such as e-mail, FTP, etc. The Basic Level IMF shall include the Simple OPL.
The Output Profile List can be viewed as a global post-processing step to the self-contained digital media program specified by the Composition PlayList. The OPL provides a mechanism to take a fully functional and conformed program specified by the CPL and aids in generating multiple distribution files for the multiple distribution channels. OPLs with more complex instructions are known as Complex OPLs and shall be included in the Extended Level IMF.
The motivation for the OPL arose from the fact that in generating each of the downstream distribution files there needs to be a way of specifying the program independent of the mechanism of how to transform (transcode) it. This way one program specification (Composition PlayList) should generate multiple files at the desired raster, bit rate and codec. The OPL is viewed as transitory in the lifecycle of the components that make up the IMF. If the downstream devices change, or the process of generating the output file changes, then a change in the OPL would be necessary.
Most of what is included in this section shall be in the Extended Level. Basic Level shall include a Simple OPL (defined in Sections 8.6.1 and 8.3) as well as any item specifically called out as Basic Level in the sections below.

8.2 Output Profile List Overview
Functional Framework
The OPL is viewed as an extensible set of instructions that operate on the CPL referenced within the OPL. These instructions should be at a minimum a simple pointer to a CPL (also known as a Simple OPL), which in turn points to the IMF track files needed to create the composition that the CPL has specified. In this minimum case, it would pass the content through with no processing. Basic Level IMF shall include Simple OPLs.
A further extension of the Simple OPL would be to add preferences for the desired output for a particular CPL. This should be described in terms of the Image Output Parameters and Audio Output Parameters, and the raster size, bit rate, compression codec, and frame rate.
The OPL's extensibility should be further leveraged to add pre-processing steps that are needed to carry out the transformation to the desired codec. These would include specific cropping, scaling and other transformations. The OPL shall not interpret the Track File(s) or the CPL to generate pre-processing instructions; rather, these operations should be specifically called out in the OPL. In addition to pre-processing, the OPL should also contain content provider preferences on how a downstream device applies the pre-processing (e.g., specific scaling such as Lanczos).
If desired, the OPL should also contain Content Provider preferences for how a downstream device is to manipulate the data received from the IMP. These parameters could include specific calls to encoding parameters (e.g. quantization tables), or should reference specific manufacturer encoding parameters.
All extensions above the Simple OPL create the Complex OPL, which shall be included in Extended Level. Most of what is detailed in this section shall be included in Extended Level. Simple OPL information that shall be in Basic Level is explicitly stated as such in this section.

OPL Relationship Overview
Functional Framework
The following diagram, Figure 23 - Output Profile List Relationship Overview, illustrates the relationship of the Output Profile List to its various components.

Figure 23 - Output Profile List Relationship Overview

An OPL references a Composition PlayList. The composition playlist provides the program information, and any transformations that are required on the overall composition are performed within the OPL. A variety of transformations are envisioned today, and it is expected that the need to effect additional transformations will arise. Bearing this in mind, the OPL is broken into specific sections, permitting additional sections to be added if the need arises. Currently the main sections are as follows:
8.2.4 Output Parameters
The Output Parameters section is designated as the section wherein the properties of the Output Image and Audio should be specified. In this section it is only necessary to call out the codec, the standard, the raster format, the bit rate, the sampling rate, etc. The specifics of how to encode the desired output are specified in later sections if necessary.

Pre-Process Parameters
The Pre-Process Parameters section provides the capability to perform numerous (unlimited) groups of pre-processing on the video or audio segment. Each section is delineated with specific blocks (e.g. <process01> ... </process01>). The objective here is to permit the content owner to specify the order in which specific processes should be carried out, and if necessary exactly how each process should be carried out. So in the case of a scaling operation, a block of instructions could specify that a Lanczos 3-lobe filter be used with specific coefficients. Each process block is processed in numerical order; therefore, if a cropping is desired before scaling, then the cropping should be placed in process block N, and the scaling in block T, so that N < T.
The Pre-Process Parameters section also provides the capability of adding overlays/burn-ins across the entire program, as well as the capability of generating fade-ups and fade-outs at the beginning/end of the program as deemed necessary by the output specification.

ColorSpace Conversion Parameters
The ColorSpace Conversion Parameters section provides the capability to enforce a specific form of color space conversion from one color space to another, permitting the description of either matrices or a fully specified three-dimensional color look-up table.

Encoding/Transcoding Parameters
The Encoding/Transcoding Parameters section provides the capability for the content owner to specify the specific process of encoding the material. This would include the ability to specify specific quantization tables, DCT or wavelet coefficients, etc. The preference would be to have manufacturers use a common language; however, during initial implementations, in the absence of such a common language, these parameters should refer to vendor-specific tags. With these tags, this section would provide the capability of calling out a vendor-specific transcode or profile, e.g., AR_MPEG2_PROFILE_43. This would provide a mechanism for preferred encoding when a specific device is present, and a fallback to a generic encoding when the specific device is not available.
Below is a pseudo-XML example highlighting the different sections of an OPL.
<OutputProfilelist>
  # Note the reference to the CPL, CompositionPlayListReference
  <Id>urn:uuid:cde69c7b-a f5-e8ffea82f345</Id>
  <CompositionPlayListReference>
    <ID>urn:uuid:bbf69c7b-a f5-e8ffea123fa1</ID>
  </CompositionPlayListReference>
  <OutputText>US English Mbps Master</OutputText>
  <Language>EN</Language>
  <Country>US</Country>
  # Below specifies desired Output Format for Video/Image
  <ImageOutputFormat />
  # Below specifies desired Output Format for Audio
  <AudioOutputFormat />
  # Below specifies the desired Encoding/Transcoding
  <EncodingFormat>
    <StandardsBody />
    <Label />
    <PreferredEncoder />
    <GenericEncoder />
  </EncodingFormat>
  # Below specifies the desired ColorSpaceTransformation
  <ColorTransforms>
    <OutputColorSpace />
    <PreferredConversion />
    # 3D LUT data?
    <GenericConversion />
  </ColorTransforms>
  # Below specifies the desired Pre-Process Parameters
  <PreProcessOps>
    <process01>
      <OverlayParameters />
      <HeadTransition />
      <TailTransition />
    </process01>
    <process02 />
  </PreProcessOps>
</OutputProfilelist>
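Informative example: a filled-in EncodingFormat block of the kind sketched above might look as follows. The values are illustrative only; AR_MPEG2_PROFILE_43 is the vendor-specific tag mentioned in the Encoding/Transcoding Parameters discussion, and the StandardsBody and GenericEncoder values are hypothetical placeholders rather than defined vocabulary.

<EncodingFormat>
  <StandardsBody>ISO</StandardsBody>
  <Label>MPEG2</Label>
  # Preferred, vendor-specific profile, used when the matching device is available
  <PreferredEncoder>AR_MPEG2_PROFILE_43</PreferredEncoder>
  # Hypothetical generic fallback when the specific device is not available
  <GenericEncoder>MPEG2_IFRAME_HPHL_50Mbps</GenericEncoder>
</EncodingFormat>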
90 8.3.3 Scalable The OPL format shall accommodate any number of Levels of instructional components. There is no limit on the number of processing steps or in the size of the file Extensible The OPL format shall allow for new Digital Video and Digital Audio features to be addressable within the List Synchronization Not Applicable Human Readable Metadata Human readable Metadata shall be in English (default), but can be provided in other languages as well File Format The Output Profile List shall use the secure (digitally signed) text-based XML file format. More specifically, the structures defined in this document are represented using the Extensible Markup Language (XML) [XML 1.0], and specified using XML Schema [XML Schema Part 1: Structures] and [XML Schema Part 2: Datatypes]. This specification shall be associated with a unique XML namespace name [Namespaces in XML]. The namespace name shall be the string value This namespace name conveys both structural and semantic version information, and serves the purpose of a traditional version number field. Table 11: XML Namespaces lists the XML namespace names used in this specification. Namespace names are represented as Uniform Resource Identifier (URI) values [RFC 2396]. These values shall be considered as simple strings, and applications should not attempt to resolve them as URLs. Table 16: XML Namespaces Qualifier URI Cpl Xs Ds The namespace qualifier values (namespace prefixes in XML jargon) used in this document (cpl, xs, ds), are not normative values. Implementations shall perform correctly with any XML compliant namespace prefix value that is associated with a URI from Table 11: XML Namespaces, above. Datatypes from other schemas that are used in this document will be prefixed with the appropriate namespace qualifier (e.g. xs:datetime). See [XML Schema Part 2: Datatypes] and [XML-Signature Syntax and Processing] for further information about these types. The MIME type [IETF RFC 2046] for a document containing a single Output Profile List element as its root shall be text/xml. 8.4 Output Profile List Constraints The following is a list of items that are intended to be constrained within the OPL. 90
91 8.4.1 Reference to a CPL (Basic Level) The OPL shall contain a reference to a CPL. An OPL without a CPL is an OPL that is not executable Precedence of Operations (Extended Level) As the OPL provides the capability of processing, the order of precedence shall be established by the order in which the process blocks are numerically ordered. Therefore, Process00 shall be executed and then Process01 and then Process Output Profile List Structure OPL shall be represented by a unique XML element, the OutputProfilelist element. The OPL shall be encoded using the UTF-8 character encoding [XML 1.0]. The OPL shall include the following fields in Basic Level unless stated as optional, in which case they should be included in Extended Level. Examples of different OPLs are given in Section 8.6. The main sections are: Header ImageOutputFormat [optional] AudioOutputFormat [optional] ColorTransforms [optional] EncodingFormat [optional] PreProcessOps [optional] General Information The OPL should provide the following general information to allow quick access to the information about the output desired. This should allow either humans or automated systems to source the general information. Below is the list of both the informative information and required information to complete the structure of the OPL. Some items are optional and therefore not required for a compliant OPL. Other sections are described in more detail in later sections. Unique ID Annotation Text [optional] Issue Date Issuer [optional] Creator [optional] Content Title Text [optional] Content Kind (e.g., Feature, Trailer, Logo, Advertisement, Episode) [optional] CompositionPlayListReference OutputText [optional] Note: If an optional section or element is not provided, the source value shall be used Header [required] Listed below are the element requirements and descriptions Unique Id The Id element uniquely identifies the OPL for tracking purposes. It shall uniquely identify the content represented by the OPL. It shall be encoded as a urn:uuid per [RFC 4122]. 91
92 8.5.4 AnnotationText [optional] The AnnotationText element shall be a free-form, human-readable annotation describing the OPL. It is meant strictly as a display hint to the user IssueDate The IssueDate element shall be used to define the time and date at which the OPL was issued. It should be displayed to the user. It shall be encoded as an xs:datetime Issuer [optional] The Issuer element shall be a free-form, human-readable annotation that shall identify the entity that created the OPL. It is meant strictly for display to the user Creator [optional] The Creator element shall be a free-form, human-readable annotation that shall identify the application used to create the OPL, the Facility that created the OPL and the operator that created the OPL. It is meant strictly for display to the user ContentTitleText The ContentTitleText element shall contain a human-readable title for the OPL e.g. The Jazz Singer. It is strictly meant as a display hint to the user. The optional language attribute is an xs:language language code and indicates the language of the content of the element. If the language attribute is not present, the default value en shall be used ContentKind [optional] The ContentKind element defines the kind of material referred to by the OPL. It is meant to be both human and machine-readable. Table 13: Examples of Content Kind, shown in Section 7, shows examples of Content Kind CompositionPlayList Reference The CompositionPlayListReference element defines the reference to the CPL(s) by calling out the Unique ID of the CPL(s). It shall uniquely identify the content that the OPL will access. It shall be encoded as a urn:uuid per [RFC 4122] OutputText [optional] The OutputText element shall be a free-form, human-readable annotation that shall provide information on the desired output. It is meant strictly for display to the user ImageOutputFormat [optional] The ImageOutputFormat element defines the output format of the MainPicture of material referred to by the Composition PlayList. It is meant to be both human and machine-readable. An example of the ImageOutputFormat is shown below: 92
<ImageOutputFormat>
  <BitRate>
    <ConstantBitRate>
      <Value>50</Value>
      <Label>Mbps</Label>
    </ConstantBitRate>
    <AverageBitRate />
    <MaxBitRate />
    <MinBitRate />
  </BitRate>
  <BitDepth>8</BitDepth>

BitRate
The BitRate is a compound element that provides the desired output bit rate. The BitRate element provides for the capability of describing the ConstantBitRate, or the AverageBitRate, MaxBitRate and MinBitRate. Each of these values is specified with a Value and Label pair. An example is shown below:

<BitRate>
  <ConstantBitRate>
    <Value>50</Value>
    <Label>Mbps</Label>
  </ConstantBitRate>
</BitRate>

or
<BitRate>
  <AverageBitRate>
    <Value>50</Value>
    <Label>Mbps</Label>
  </AverageBitRate>
  <MaxBitRate>
    <Value>75</Value>
    <Label>Mbps</Label>
  </MaxBitRate>
  <MinBitRate>
    <Value>25</Value>
    <Label>Mbps</Label>
  </MinBitRate>
</BitRate>

ColorEncoding
The ColorEncoding element shall contain information about the color encoding parameters of the MainPicture element. This is informative information to allow humans and machines to read and, if necessary, manipulate the MainPicture asset of the Composition to conform the color information to the OutputColorEncoding specification. Additional elements that should optionally be specified here are:

StandardsBody [optional]
The StandardsBody element shall contain a textual representation of the different standards bodies that specify colorspaces and should be displayed to the user. There are a number of permissible values, such as ITU, CIE, etc.

ColorSpace [optional]
The ColorSpace element shall contain a textual representation of the different standard colorspaces and the associated gamut and should be displayed to the user. There are a number of permissible values, such as Rec-709, Rec-601, XYZ, etc.

ChromaFormat [optional]
The ChromaFormat element shall contain a textual representation of the chroma subsampling and should be displayed to the user. There are a number of permissible values, such as 4:4:4, 4:2:2, 4:2:0, etc.

ChromaEncoding [optional]
The ChromaEncoding element shall contain a textual representation of the arrangement of the color components and should be displayed to the user. There are a number of permissible values, such as YCbCr, RGB, etc.
BitDepth [optional]
The BitDepth element shall contain a textual representation of the bit depth of each of the channels of the Image and should be displayed to the user. There are a number of permissible values, such as 8, 10, 12, 16, etc.

TransferFunction [optional]
The TransferFunction element shall contain a textual representation of the different ways to compand the data within the image, and should be displayed to the user. There are a number of permissible values, such as Linear, Log, Power, etc.

CodeRange [optional]
The CodeRange element shall contain a textual representation of the limitation applied to the dynamic range of the data and should be displayed to the user. There are a number of permissible values, such as Full, Head, Limited, etc.

<ColorEncoding>
  <StandardsBody>ITU</StandardsBody>
  <ColorSpace>Rec-601</ColorSpace>
  <ChromaFormat>4:2:2</ChromaFormat>
  <ChromaEncoding>YCBCR</ChromaEncoding>
  <BitDepth>10</BitDepth>
  <TransferFunction>Linear</TransferFunction>
  <CodeRange>Limited</CodeRange>
</ColorEncoding>

CompressionStandard
The CompressionStandard element is a compound element that contains information on the StandardsBody that provides the Codec, the type of Compression, and additional parameters needed to specify the PictureCoding, ProfileType and LevelType.

CompressionType [optional]
The CompressionType element shall contain a URI [RFC 2396] that uniquely identifies the CompressionType.

Label [optional]
The Label element shall contain a textual representation of the CompressionType, which should be displayed to the user. For each CompressionType, and hence unique URI, there are a number of permissible Label values. An example is shown below in Table 17: Examples of CompressionType Elements.
Note: The specification of this mapping is beyond the scope of this document.
96 Table 17: Examples of CompressionType Elements Label MPEG2 MPEG4 JPEG2000 URI Need Valid URI Need Valid URI Need Valid URI PictureCoding [optional] The PictureCoding element shall contain a textual representation of the type of Picture encoding, which should be displayed to the user. For each type, there are a number of permissible Label values. An example of such is shown below in Table 18: Examples of PictureCoding Elements. Note: The specification of this mapping is beyond the scope of this document. Table 18: Examples of PictureCoding Elements Label I-FRAME LGOP URI Need Valid URI Need Valid URI ProfileType [optional] The ProfileType element shall contain a textual representation of the type of profiles, in the case of MPEG-2 encoding, which should be displayed to the user. For each type, there are a number of permissible Label values. An example of such is shown below in Table 19: Examples of ProfileType elements. Note: The specification of this mapping is beyond the scope of this document. Table 19: Examples of ProfileType elements Label URI SIMPLE_PROFILE Need Valid URI MAIN_PROFILE HIGH_PROFILE Need Valid URI Need Valid URI LevelType [optional] The LevelType element shall contain a textual representation of the type of levels, in the case of MPEG-2 encoding, which should be displayed to the user. For each type, there are a number of permissible Label values. An example of such is shown below in Table 20: Examples of LevelType Elements. Note: The specification of this mapping is beyond the scope of this document. 96
97 Table 20: Examples of LevelType Elements Label URI SIMPLE_PROFILE Need Valid URI MAIN_PROFILE HIGH_PROFILE Need Valid URI Need Valid URI StandardsBody [optional] The StandardsBody element shall contain a URI [RFC 2396] that uniquely identifies the Standards Body issuing the Standard Label [optional] The Label element shall contain a textual representation of the Standards Body, which should be displayed to the user. For each issuing Standards Body, and hence unique URI, there are a number of permissible Label values. An example of such is shown below in Table 21: Examples of Label Elements. Note: The specification of this mapping is beyond the scope of this document. Table 21: Examples of Label Elements Label URI SMPTE STD xxx-y W3 ISO FrameRate [optional] The FrameRate element shall specify the desired framerate of the final output, and is expressed in frames per second. This is informative information to allow for humans and machine to read and, if necessary, manipulate the MainPicture asset of the Composition to conform to the OutputPixelMatrix requirements Crop [optional] The Crop element shall contain the desired Region of Interest of the image. This shall specify the 2 points of the bounding box that defines the region of the image that should be utilized for downstream processing. This is expressed as the Top Left Corner, and the Bottom Right corner of ROI, and is expressed in horizontal and vertical pixel counts of the MainPicture element. This is informative information to allow for humans and machine to read and, if necessary, manipulate the MainPicture asset of the Composition to conform to the OutputPixelMatrix requirements. 97
98 <Crop> <x1>0</x1> <y1>0</y1> <x2>1919</x2> <y2>1079</y2> </Crop> CanvasCoordinates [optional] The CanvasCoordinates element shall contain the Region of Interest of the image. This shall specify the 2 points of the bounding box that defines the region of the image that should be utilized. This is expressed as the Top Left Corner, and the Bottom Right corner of ROI, and is expressed in horizontal and vertical pixel counts of the MainPicture element. This is informative information to allow for humans and machine to read and, if necessary, manipulate the MainPicture asset of the Composition to conform to the OutputPixelMatrix requirements. <CanvasCoordinates> <x1>0</x1> <y1>0</y1> <x2>1919</x2> <y2>1079</y2> </CanvasCoordinates> 98
AudioOutputFormat [optional]
The AudioOutputFormat element defines the output format of the MainSound material referred to by the Composition PlayList. It is meant to be both human and machine-readable. The example below shows some of the AudioOutputFormat elements:

<AudioOutputFormat>
  <SampleRate>48000</SampleRate>
  <BitDepth>24</BitDepth>
  <SamplesPerFrame>800.8</SamplesPerFrame>
  <CompressionStandard>
    <CompressionType>LPCM</CompressionType>
    <Label>PCM</Label>
  </CompressionStandard>
  <PitchCorrection>No</PitchCorrection>
</AudioOutputFormat>

SampleRate [optional]
The SampleRate element shall contain information about the processing and parameters to be applied to the MainSound element to obtain the desired sample rate for the output, e.g. the specific DSP algorithm to be applied and recommended settings.

BitDepth [optional]
The BitDepth element shall contain information about the processing and parameters to be applied to the MainSound element to obtain the desired BitDepth for the output, e.g. truncation, dither values, etc.

SamplesPerFrame [optional]
The SamplesPerFrame element shall contain information about the processing and parameters to be applied to the MainSound element to obtain the desired samples per frame for the output, e.g. a specific resampling process or algorithm.

SpeedChange [optional]
The SpeedChange element shall contain information about the processing and parameters to be applied to the MainSound element to obtain the desired speed (frame rate) of the output, e.g. the specific processes to be applied, their order and recommended settings.

PitchCorrection [optional]
The PitchCorrection element shall contain information about the processing parameters to be applied to the MainSound element to correct for speed changes applied to the output using the SpeedChange item, e.g. the specific processes to be applied, their order and recommended settings.
100 CompressionStandard [optional] The CompressionStandard element shall contain information of the compression encoding parameters of the MainSound element. This is informative information to allow for humans and machine to read and, if necessary, manipulate the MainSound asset of the Composition to conform the sound information to the Compression specification BitRate [optional] The BitRate element shall contain information about the processing parameters to be applied to the MainSound element to obtain the desired compression bit rate for the output, which may be comprised of several metadata items. For constant bit rate encoding such as AC3, this is the BitRateValue (e.g. 384) and the BitRateScale (Kbits/sec, Mbits/sec). For variable bit rate encoding such as DTS-MA, this is the maximum, minimum and average BitRateValues and their BitRateScales StandardsBody [optional] The StandardsBody element shall contain a URI [RFC 2396] that uniquely identifies the Standards Body issuing the Standard Label [optional] The Label element shall contain a textual representation of the Standards Body, which should be displayed to the user. For each issuing Standards Body, and hence unique URI, there are a number of permissible Label values AudioConfig [optional] The AudioConfig element shall provide the information on how the individual channels should be arranged in the output. This is informative information to allow for humans and machine to read and, if necessary, manipulate the Composition to conform to the desired configuration of the output format. This tag may change once the Multi-Channel Audio specification is approved ChannelNN [optional] The ChannelNN element shall contain the specific Channel that is being described where NN is the channel number Language [optional] The Language element shall contain an informative description of the language used for the specific ChannelNN Config [optional] The Config element shall contain an informative description of the total number of channels used for the specific ChannelNN Channel [optional] The Channel element shall contain an informative description of the specific channel. Examples include L for Left and R for Right Label [optional] The Label element shall contain a textual representation of the AudioConfig, which should be displayed to the user. 100
<AudioConfig>
  <Channel01>
    <Language>English</Language>
    <Config>2.0</Config>
    <Channel>L</Channel>
  </Channel01>
  <Channel02>
    <Language>English</Language>
    <Config>2.0</Config>
    <Channel>R</Channel>
  </Channel02>
  <Channel03>
    <Language>ME</Language>
    <Config>2.0</Config>
    <Channel>L</Channel>
  </Channel03>
  <Channel04>
    <Language>ME</Language>
    <Config>2.0</Config>
    <Channel>R</Channel>
  </Channel04>
</AudioConfig>

OutputSpeedOffset [optional]
The OutputSpeedOffset element shall contain the rational percentage offset intended to be applied to the Composition. This is informative information to allow humans and machines to read and, if necessary, manipulate the Composition to conform to the desired running time of the output.

ColorTransforms [optional]
Within the ColorTransforms section, provision is made for conversions from one color space to another, for decimating the color signal in the desired fashion, and for applying look-up tables, either 1D, 3D or matrices. An example is given below:
<ColorTransforms>
  <ColorEncoding>
    <StandardsBody>ITU</StandardsBody>
    <ColorSpace>Rec-601</ColorSpace>
    <ChromaEncoding>YCBCR</ChromaEncoding>
    <BitDepth>10</BitDepth>
    <TransferFunction>Linear</TransferFunction>
    <CodeRange>Limited</CodeRange>
  </ColorEncoding>
  <ColorSampling>
    <Filter>Mean</Filter>
    <ChromaFormat>4:2:2</ChromaFormat>
  </ColorSampling>
</ColorTransforms>

EncodingFormat [optional]
Within the EncodingFormat section, the objective is to provide the content provider with full control of the encoding/transcoding process. This should be accomplished by providing encoding instructions that are specific to a particular manufacturer, or by including a reference to a manufacturer's pre-packaged profiles. Additionally, it should also provide low-level granularity in the order in which specific encoding parameters should be identified.
Note: The specification of the syntax here requires additional manufacturer input and is currently beyond the scope of this document.

PreProcessOps [optional]
Maintained within process blocks, i.e., elements named ProcessNN (where NN > 0), this set of operations enables a wide range of processing capabilities. Examples would include scaling, overlay addition, time code changes, color subsampling, etc. The processing blocks are sequenced in numerical order, with Process01 being the first one that is processed. The minimal requirement is that a Label element be provided so that informative information is available for the machine to read and, if necessary, manipulate the Composition to conform to the desired Output. A few examples are shown below.

Scaling
<Process01>
  <Label>Resize</Label>
  <Scale>
    <Filter>Lanczos</Filter>
    <FilterSetting01>3-Lobe</FilterSetting01>
    <HSize>720</HSize>
    <VSize>486</VSize>
  </Scale>
</Process01>
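Informative example: a cropping step could be expressed in the same style as the process blocks shown here. The block number, Label and coordinate values below are placeholders, and the Crop child simply reuses the coordinate form defined for the ImageOutputFormat Crop element; its use inside a process block is an assumption, not defined syntax.

<Process02>
  <Label>Crop</Label>
  # Extract a centered 1920x803 region from a 1920x1080 picture (placeholder values)
  <Crop>
    <x1>0</x1>
    <y1>138</y1>
    <x2>1919</x2>
    <y2>940</y2>
  </Crop>
</Process02>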
TimeCodeChange
<Process03>
  <Label>Timecode Change</Label>
  <TimecodeOutput>
    <TCRate>59.94</TCRate>
    <TCRaster>I</TCRaster>
    <TCType>DF</TCType>
    <TCChange>
      <Pulldown>Yes</Pulldown>
      # Use 3:2 Pulldown
      <AFrame>01:00:00:00</AFrame>
      # Location of A Frame
    </TCChange>
  </TimecodeOutput>
</Process03>

OverlayType
<OverlayItem>
  <OverlayType>text</OverlayType>
  <OverlaySourceID />
  <OverlaySourceColorSpace>graphic</OverlaySourceColorSpace>
  <TextItem>
    <Annotation>Property of Warner Bros.</Annotation>
    <Color>white (or #FFFFFF)</Color>
    <OutlineColor>black (or #000000)</OutlineColor>
    <Font>arial</Font>
    <Justification>left</Justification>
  </TextItem>
</OverlayItem>

8.6 OPL Examples
The following examples illustrate various types of OPL.

8.6.1 A Simple OPL (Basic Level)
This is the minimal OPL that simply calls out a reference to a composition playlist. It is up to the facility to apply any of the transformations desired to generate any desired output.
104 - <OutputProfileList> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:cde69c7b-a f5-e8ffea82f345</Id> <AnnotationText>US English 2.35 Master</AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer /> <Creator /> - <CompositionPlayListReference> <ID>urn:uuid:bbf69c7b-a f5-e8ffea123fa1</ID> </CompositionPlayListReference> </OutputProfileList>
8.6.2 A Level-1 Complex OPL (Extended Level) This is the first level of Complex OPL. It calls out the desired Output format in terms of an Image format and an Audio format. It is up to the post facility to determine the best way to deliver the desired output. In this example, the desired output is a 1920x1080, 50 Mbps, MPEG-2, I-frame-only file encoded as HP-HL, with the audio as MPEG-1 Layer 2 at 48 kHz. Notice that the OPL only provides a reference to the Composition PlayList and the desired output. It does not provide any direction on how the final file should be generated. 104
105 - <OutputProfileList> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:cde69c7b-a f5-e8ffea82f345</Id> <AnnotationText>Avatar_FTR_S_EN-XX_US_PG13_51_2K_DI_ _PX</AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer /> <Creator /> <ContentTitleText /> <ContentKind /> - <CompositionPlayListReference> <ID>urn:uuid:bbf69c7b-a f5-e8ffea123fa1</ID> </CompositionPlayListReference> <OutputText>US English Mbps Master</OutputText> <Country>US</Country> - <ImageOutputFormat> - <BitRate> - <ConstantBitRate> <Value>50</Value> <Label>Mbps</Label> </ConstantBitRate> <AverageBitRate /> <MaxBitRate /> <MinBitRate /> </BitRate> <BitDepth>8</BitDepth> - <ColorEncoding> <ColorSpace>Rec-709</ColorSpace> <ChromaFormat>4:2:2</ChromaFormat> </ColorEncoding> - <CompressionStandard> <StandardsBody>MPEG-LA</StandardsBody> - <CompressionType> <Label>MPEG2</Label> <PictureCoding>I-FRAME ONLY</PictureCoding> <ProfileType>HIGH_PROFILE</ProfileType> <LevelType>HIGH_LEVEL</LevelType> </CompressionType> </CompressionStandard> - <SpatialParameters> <CanvasCoordinates> <x1>0</x1> <y1>0</y1> <x2>1919</x2> <y2>1079</y2> </CanvasCoordinates> - <Scale> <xscale>1.0</xscale> <yscale>1.0</yscale> </Scale> + <Crop> </SpatialParameters> <FrameRate>24</FrameRate> </ImageOutputFormat> # Below specifies desired Output Format for Audio - <AudioOutputFormat> <AudioConfig /> <SamplingFreq>48000</SamplingFreq> <BitDepth>16</BitDepth> - <CompressionStandard> <StandardsBody>MPEG-LA</StandardsBody> - <CompressionType> <Label>MPEG1 Layer2</Label> </CompressionType> </CompressionStandard> </AudioOutputFormat> </OutputProfileList> 105
106 8.6.3 A Level-2 Complex OPL (Extended Level) This example illustrates another level of complexity for an OPL, creating an NTSC master from an HD IMP. In this example, the original IMP contains the following content:
- 4:4:4 10-bit RGB DPX frames at 24 Hz in Rec.709
- Active picture at 1920x803
- 5.1 English, 5.1 M&E, 2.0 English, 2.0 M&E
The Output Profile List shall create the following content:
- 4:2:2 10-bit YCBCR at 59.94 Hz in Rec.601 (uncompressed)
- 16x9 720x486 with letterbox mattes
- 2.0 English, 2.0 M&E
The process to create this 525 master is as follows:
1. Resize the image from 1920x803 to 720x362 using a Lanczos 3-Lobe filter
2. Add black (RGB=000) mattes above and below the image to fill to 720x486
3. Convert to 4:4:4 YCBCR using the equations in SMPTE 293M
4. Sub-sample to 4:2:2 using Mean filtering
5. Change the time code by adding 3:2 pulldown and changing the rate to 59.94 fields/second
6. Resample the audio to 2002 samples per frame
106
107 - <OutputProfilelist> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:abc69c7b-a f5-a8ffea82f345</Id> <AnnotationText>Ratatouille_TH_FEA_DPX-V_1080P23_ENG-ENG_ </AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer>Disney</Issuer> <Creator>Pixar</Creator> <ContentTitleText /> <ContentKind /> - <CompositionPlayListReference> <ID>urn:uuid:aaf69c7b-a f5-e8ffea123fa1</ID> </CompositionPlayListReference> <OutputText>US English 2.39 DPX Master</OutputText> <TotalRunningTime>01:51:15:10</TotalRunningTime> <Language>EN</Language> <Country>US</Country> # Below specifies desired Output Format for Video/Image, but not the steps to get there - <ImageOutputFormat> - <ColorEncoding> <StandardsBody>ITU</StandardsBody> <ColorSpace>Rec-601</ColorSpace> <ChromaFormat>4:2:2</ChromaFormat> <ChromaEncoding>YCBCR</ChromaEncoding> <BitDepth>10</BitDepth> <TransferFunction>Linear</TransferFunction> <CodeRange>Limited</CodeRange> # </ColorEncoding> - <CompressionStandard> <CompressionType>Uncompressed</CompressionType> <Label>None</Label> </CompressionStandard> 107
108 continuation of a Level 2 OPL: <SpatialParameters> - <CanvasCoordinates> <x1>0</x1> <y1>0</y1> <x2>719</x2> <y2>485</y2> </CanvasCoordinates> - <ActiveCoordinates> #Not sure about the tag name <x1>0</x1> <y1>62</y1> <x2>719</x2> <y2>424</y2> </ActiveCoordinates> <PixelAspectRatio>1.21</PixelAspectRatio> </SpatialParameters> - <FrameRate> <Rate>59.94</Rate> <Raster>I</Raster> # Interlaced <TimecodeType>DF</TimecodeType> # Drop-Frame </FrameRate> </ImageOutputFormat> # Below specifies desired Output Format for Audio - <AudioOutputFormat> <SampleRate>48000</SampleRate> <BitDepth>24</BitDepth> <SamplesPerFrame>800.8</SamplesPerFrame> - <CompressionStandard> <CompressionType>LPCM</CompressionType> <Label>PCM</Label> </CompressionStandard> - <AudioConfig> - <Channel01> <Language>English</Language> <Config>2.0</Config> <Channel>L</Channel> </Channel01> - <Channel02> <Language>English</Language> <Config>2.0</Config> <Channel>R</Channel> </Channel02> - <Channel03> <Language>ME</Language> <Config>2.0</Config> <Channel>L</Channel> </Channel03> - <Channel04> <Language>ME</Language> <Config>2.0</Config> <Channel>R</Channel> </Channel04> </AudioConfig> <PitchCorrection>No</PitchCorrection> </AudioOutputFormat>...continued on the next page 108
109 continuation of a complex Level 2 OPL: - <ColorTransforms> - <ColorEncoding> <StandardsBody>ITU</StandardsBody> <ColorSpace>Rec-601</ColorSpace> <ChromaEncoding>YCBCR</ChromaEncoding> <BitDepth>10</BitDepth> <TransferFunction>Linear</TransferFunction> <CodeRange>Limited</CodeRange> # </ColorEncoding> - <ColorSampling> <Filter>Mean</Filter> <ChromaFormat>4:2:2</ChromaFormat> </ColorSampling> </ColorTransforms> # Below specifies the desired Pre-Process Parameters - <PreProcessOps> - <Process01> <Label>Resize</Label> - <Scale> <Filter>Lanczos</Filter> <FilterSetting01>3-Lobe</FilterSetting01> <HSize>720</HSize> <VSize>486</VSize> </Scale> </Process01> - <Process02> <Label>Timecode Change</Label> - <TimecodeOutput> <TCRate>59.94</TCRate> <TCRaster>I</TCRaster> <TCType>DF</TCType> - <TCChange> <Pulldown>Yes</Pulldown> #Use 3:2 Pulldown <AFrame>01:00:00:00</AFrame> #Location of A Frame </TCChange> </TimecodeOutput> </Process02> </PreProcessOps> </OutputProfilelist>
8.6.4 An Overlay Pre-Processing Block for an OPL (Extended Level) This example illustrates how one should execute Overlay pre-processing in an OPL. (Pre-processing section for an OPL) - <Position> #note: assuming upper-left of image (or image sequence) is origin. <x>960</x> <y>1050</y> </Position> <Opacity>100</Opacity> 109
110 <CompositeMethod>over</CompositeMethod> <StartFrame>0</StartFrame> <EndFrame>10000</EndFrame> </OverlayItem> - <OverlayItem> <OverlayType>text</OverlayType> <OverlaySourceID /> <OverlaySourceColorSpace>graphic</OverlaySourceColorSpace> - <TextItem> <Annotation>Property of Warner Bros.</Annotation> <Color>white (or #FFFFFF)</Color> <OutlineColor>black (or #000000)</OutlineColor> <Font>arial</Font> <Justification>left</Justification> </TextItem> <size>20</size> <scale>1.0</scale> - <Position> #note: assuming upper-left of image (or image sequence) is origin. <x>160</x> <y>1000</y> </Position> <Opacity>100</Opacity> <CompositeMethod>over</CompositeMethod> <StartFrame>20000</StartFrame> <EndFrame>30000</EndFrame> </OverlayItem> </OverlayParameters> - <Transitions> - <TransitionItem> <Type>fadeup</Type> <Duration>48</Duration> <StartFrame>11</StartFrame> <StartASideOpacity>0</StartASideOpacity> <StartBSideOpacity>0</StartBSideOpacity> <EndFrame>58</EndFrame> <EndASideOpacity>100</EndASideOpacity> <EndBSideOpacity>0</EndBSideOpacity> </TransitionItem> - <TransitionItem> <Type>dissolve</Type> <Duration>48</Duration> <StartFrame>100</StartFrame> <StartASideOpacity>0</StartASideOpacity> <StartBSideOpacity>100</StartBSideOpacity> <EndFrame>100101</EndFrame> <EndASideOpacity>100</EndASideOpacity> <EndBSideOpacity>0</EndBSideOpacity> </TransitionItem> </Transitions> </PreProcessOps> 110
111 9 Packaging
9.1 Introduction
Packaging is defined as the process of combining elements to prepare them for shipping or transfer. A package shall generally consist of one or more compositions (i.e., CPL and Track Files) along with two more elements, which are specified below.
9.2 Packaging System Overview
Functional Framework
For the purpose of documenting the specific requirements for an Interoperable Master Format Packaging system, it is helpful to divide the system into a set of components. The performance requirements for each of these components will be described in the following sections:
Distribution Package - CPL and Track Files plus package metadata. The package metadata includes a Digital Signature, which allows the authenticity of the package to be tested (e.g., after delivery to a distribution channel).
Packing List - A container for package metadata: the list of files included in the Package.
Asset Map - A container for package metadata: provides a map of UUID values to data storage locations (e.g., file system paths).
Packaging Concepts
It is common practice to divide content into versions for distribution. These versions, parts of versions, or multiple versions need to be transported between facilities. The mechanism to accomplish this is to create an Interoperable Master Package, or IMP. This mechanism is described in the following sections.
9.3 Distribution Package
Introduction
The Distribution Package has three major components: the payload (CPL and Track Files), the Output Profile List, and package metadata (contained in the Packing List and Asset Map). These are all of the elements required for a complete delivery of IMF to a recipient. A Distribution Package can contain a complete feature film composition or a set of compositions. Alternatively, it can carry as little as a single file (e.g., to update one reel's subtitle or soundtrack).
Distribution Package General
A Distribution Package shall consist of a Packing List, an OPL and one or more IMF CPLs and/or Track Files. The following requirements apply.
Packing for Transport
The distribution method shall allow an IMP to be transported via physical media or network. 111
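To make the Packing List concept above concrete, the sketch below shows what a minimal Packing List for an IMP delivery might look like. It is an informative illustration only, loosely following the SMPTE ST 429-8 structure cited in the Standards clause below; the namespace is omitted, and the element values (UUIDs, hash, sizes, names) are placeholders rather than normative IMF syntax.

<PackingList> <!-- namespace per SMPTE ST 429-8 omitted in this sketch -->
  <Id>urn:uuid:00000000-0000-0000-0000-000000000001</Id>
  <AnnotationText>Example IMP delivery</AnnotationText>
  <IssueDate>2011-02-19T00:00:00-00:00</IssueDate>
  <Issuer>Example Facility</Issuer>
  <Creator>Example Packaging Tool</Creator>
  <AssetList>
    <Asset> <!-- the CPL document -->
      <Id>urn:uuid:00000000-0000-0000-0000-000000000002</Id>
      <Hash>placeholder-base64-digest</Hash>
      <Size>12345</Size>
      <Type>text/xml</Type>
    </Asset>
    <Asset> <!-- an image Track File -->
      <Id>urn:uuid:00000000-0000-0000-0000-000000000003</Id>
      <Hash>placeholder-base64-digest</Hash>
      <Size>123456789</Size>
      <Type>application/mxf</Type>
    </Asset>
  </AssetList>
  <!-- an optional ds:Signature would follow here to allow authenticity checking -->
</PackingList>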
112 9.3.5 Distribution Volume General A Distribution Volume shall consist of one or more Distribution Packages plus an Asset Map. A Distribution Volume for physical delivery should consist of a single storage media item that contains a single filesystem partition, and should contain the Asset Map in the root directory of that filesystem. A Distribution Volume for network delivery should be identified by the URI of the Asset Map Standards SMPTE ST 429-8:2007 defines a Packing List with optional Digital Signature. SMPTE ST 429-9:2007 defines an Asset Mapping document. Appendix A defines a profile for a Distribution Volume on a path-based filesystem. 112
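As an informative illustration of the Asset Map role described above, the sketch below shows how an Asset Map might resolve the UUIDs listed in a Packing List to file system paths on a single-volume delivery. It loosely follows the SMPTE ST 429-9 structure cited in the Standards clause; the namespace is omitted, and the element values (UUIDs, paths, lengths) are placeholders rather than normative IMF syntax.

<AssetMap> <!-- namespace per SMPTE ST 429-9 omitted; stored in the root directory of the volume -->
  <Id>urn:uuid:00000000-0000-0000-0000-00000000000a</Id>
  <Creator>Example Packaging Tool</Creator>
  <VolumeCount>1</VolumeCount>
  <IssueDate>2011-02-19T00:00:00-00:00</IssueDate>
  <Issuer>Example Facility</Issuer>
  <AssetList>
    <Asset> <!-- the CPL listed in the Packing List sketch above -->
      <Id>urn:uuid:00000000-0000-0000-0000-000000000002</Id>
      <ChunkList>
        <Chunk>
          <Path>CPL_example.xml</Path>
          <VolumeIndex>1</VolumeIndex>
          <Offset>0</Offset>
          <Length>12345</Length>
        </Chunk>
      </ChunkList>
    </Asset>
    <Asset> <!-- the image Track File -->
      <Id>urn:uuid:00000000-0000-0000-0000-000000000003</Id>
      <ChunkList>
        <Chunk>
          <Path>video/image_track.mxf</Path>
          <VolumeIndex>1</VolumeIndex>
          <Offset>0</Offset>
          <Length>123456789</Length>
        </Chunk>
      </ChunkList>
    </Asset>
  </AssetList>
</AssetMap>

For a network delivery, the package would simply be identified by the URI of this Asset Map document, as noted in 9.3.5.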
113 Annex A - Example Workflows
The following diagrams show examples of future-state workflows using the concepts of the IMF. The IMF and its file-based workflow enable Mastering & Distribution Servicing to serve both existing and emerging distribution channels. It must be stressed that these are only example workflows. As one becomes more familiar with the concepts of the IMF, one can imagine many different workflows using the IMF. It is not the intention of this document to identify all of these possibilities.
Figure 24 - Future State - Mastering & Distribution Servicing 113
114 Figure 25 - Example IMF Workflow 114
115 B - Composition PlayList (CPL) Example Composition PlayLists (CPLs) are scripts that link the IMF Track Files together into synchronized pieces of content. CPLs are written in XML. B1 CPL Schema <?xml version="1.0" encoding="utf-8"?> <xs:schema targetnamespace=" xmlns:cpl=" xmlns:dcml=" xmlns:ds=" xmlns:xs=" elementformdefault="qualified" attributeformdefault="unqualified"> <xs:import namespace=" /> <xs:import namespace=" /> <!-- top level elements --> <xs:element name="compositionplaylist" type="cpl:compositionplaylisttype" /> <!-- CompositionPlaylist --> <xs:complextype name="compositionplaylisttype"> <xs:sequence> <xs:element name="id" type="dcml:uuidtype" /> <xs:element name="annotation" type="dcml:usertexttype" minoccurs="0" /> <xs:element name="issuedate" type="xs:datetime" /> <xs:element name="issuer" type="dcml:usertexttype" minoccurs="0" /> <xs:element name="creator" type="dcml:usertexttype" minoccurs="0" /> <xs:element name="contenttitle" type="dcml:usertexttype" /> <xs:element name="contentversionlist" minoccurs="0"> <xs:complextype> 115
116 <xs:sequence> <xs:element name="contentversion" type="cpl:contentversiontype" maxoccurs="unbounded" /> </xs:sequence> </xs:complextype> </xs:element> <xs:element name="contentkind" type="cpl:contentkindtype" minoccurs="0" /> <!-- The structure that follows provides a list of essence descriptor elements that will be referenced (by UUID) from the resources that make use of the respective encoding. For example, if a composition contains both stereo and 5.1 sound, there would be two sound essence descriptors in this section of the CPL, and each of the stereo and 5.1 resource elements would contain a single UUID that references the respective descriptor (see SourceEncoding below). At this time there is not much consensus as to the contents of the descriptors and it is very likely that there will be an evolving set of parameters. For this reason, the model presented here simply provides the UUID needed for reference from the resource elements and an "any" element that can contain an XML structure defined elsewhere. It is expected that this approach will allow new essence descriptors to be introduced over time without the need for modifying the core CPL standard. A very likely source of essence descriptors is the MXF files that are referenced by a CPL document. Since the MXF wrapping will require extensive documentation of essence encoding parameters (in structures not coincidentally named "Essence Descriptors"), it may suffice to simply store an XML transformation of the MXF essence descriptor in this place. See interop-imf-essence.xsd for example essence descriptors (Based on IMF 1.0). --> <xs:element name="essencedescriptorlist"> <xs:complextype> <xs:sequence> 116
117 <xs:element name="essencedescriptor" type="cpl:essencedescriptorbasetype" maxoccurs="unbounded" /> </xs:sequence> </xs:complextype> </xs:element> <!-- The CPL timeline consists of a sequence of edit units that is the concatenation of all of the edit units in all of the Sequence elements. The individual edit units are addressed using an integer value that extends from zero (0), the first edit unit, through n-1, where n is the number of edit units in the CPL timeline (the composition's duration). While exact and very useful, this method of timeline location (index by zero-based integer count) is not widely used for interfacing timelines between devices. It is expected that SMPTE ST 0012 timecode (or its successor) will continue to be used for that purpose. To facilitate predictable timecode output from a CPL timeline, the parameters in this element provide the information necessary to initialize a timecode generator. --> <!-- Defines the SMPTE ST 0012 timecode timeline that corresponds to the entire composition timeline. --> <xs:element name="compositiontimecode" minoccurs="0"> <xs:complextype> <xs:sequence> <!-- true = drop frame, false = non-drop frame --> <xs:element name="timecodedropframe" type="xs:boolean" /> <!-- true = interlace, false = progressive (TODO: is this really needed?) --> <xs:element name="timecodeinterlace" type="xs:boolean" /> <!-- Number of timecode frames per second, i.e., the "frames" field of the time code will extend 117
118 from 0 to TimecodeRate - 1. --> <xs:element name="timecoderate" type="xs:positiveinteger" /> <!-- The time code at edit unit zero (0) shall be equal to the value presented in TimecodeStartAddress and shall increment by one frame for each subsequent Edit Unit in the composition. --> <xs:element name="timecodestartaddress" type="cpl:timecodetype" /> </xs:sequence> </xs:complextype> </xs:element> <!-- The duration of the composition presented in a convenient notation for direct display, calculated by summing the durations of all of the sequences. Intentionally limited to HH:MM:SS --> <xs:element name="totalrunningtime" minoccurs="0"> <xs:simpletype> <xs:restriction base="xs:string"> <xs:pattern value="[0-9][0-9]:[0-5][0-9]:[0-5][0-9]" /> </xs:restriction> </xs:simpletype> </xs:element> <!-- A Locale is a grouping of region or country codes, rating and optional comment elements that identifies the scope of the intended audience in terms of location, language and censorship authority. A CPL may contain zero or more Locales. --> <xs:element name="localelist" minoccurs="0"> <xs:complextype> <xs:sequence> <xs:element name="locale" type="cpl:localetype" /> </xs:sequence> 118
119 </xs:complextype> </xs:element> <!-- An Edit Unit is the smallest temporal increment of access to Essence, e.g. a frame or a sample. A CPL has a temporal granularity of one Edit Unit. Note that sample-level granularity is available in resource elements that access sound essence. --> <!-- The number of Editable Units to be reproduced during a temporal interval having a duration of exactly one (1.0) second. All essence referenced from this CPL shall have equal EditRate. --> <xs:element name="editrate" type="dcml:rationaltype" /> <!-- The actual playlist begins here. Content is divided into temporal epochs called "Sequences". Each sequence contains one or more segments that comprise the output of the timeline. Segments with identical TrackId values form a "track", or timeline output. There shall be zero or one picture segments in a Sequence. There shall be zero or more sound segments, subtitle segments, caption segments and time code segments (upper limits to be added as a constraint). Sequences are reproduced consecutively, in the order defined in the CPL. --> <xs:element name="sequencelist"> <xs:complextype> <xs:sequence> <xs:element name="sequence" type="cpl:sequencetype" /> </xs:sequence> </xs:complextype> </xs:element> <!-- Digital signature may be used to support authenticity checking. --> <xs:element name="signer" type="ds:keyinfotype" minoccurs="0" /> <xs:element ref="ds:signature" minoccurs="0" /> </xs:sequence> </xs:complextype> 119
120 <!-- A Sequence is a set of segments intended for simultaneous playout. --> <xs:complextype name="sequencetype"> <xs:sequence> <xs:element name="id" type="dcml:uuidtype"/> <xs:element name="annotation" type="dcml:usertexttype" minoccurs="0" /> <xs:element name="sequenceissuedate" type="xs:datetime" minoccurs="0" /> <!-- TODO: really needed? --> <!-- Each segment shall have equal duration, which shall be the sum of the durations of its child Resource elements. The duration of a segment shall be no less than the quotient of the EditRate rounded up to the nearest integer value (i.e., one second). --> <xs:element name="segmentlist"> <xs:complextype> <xs:sequence> <!-- there shall be zero or one picture tracks --> <xs:element name="imagesegment" type="cpl:segmenttype" minoccurs="0" /> <!-- The TrackID element (see the definition of BaseSegmentType below) uniquely identifies a track for those essence types that allow multiple segments (such as an AudioSegment). When more than one track of a type is used, the following shall hold true: o No two Tracks in a Sequence shall have the same TrackID o The TrackID shall be present in every segment that is expected to be reproduced on a particular output. OPL discussion: how to select among tracks for output? The TrackID can be used by the OPL to select a particular track when more than one track is present. --> <!-- there shall be zero or more sound tracks --> <xs:element name="audiosegment" type="cpl:segmenttype" minoccurs="0" maxoccurs="unbounded" /> 120
121 <!-- there shall be zero or more subtitle tracks --> <xs:element name="subtitlesegment" type="cpl:segmenttype" minoccurs="0" maxoccurs="unbounded" /> <!-- there shall be zero or more caption tracks --> <xs:element name="captionsegment" type="cpl:segmenttype" minoccurs="0" maxoccurs="unbounded" /> <!-- there shall be zero or more time code tracks --> <xs:element name="timecodesegment" type="cpl:segmenttype" minoccurs="0" maxoccurs="unbounded" /> <!-- there shall be zero or one marker tracks --> <xs:element name="markersegment" type="cpl:segmenttype" minoccurs="0" /> <!-- Additional sequence track elements may be introduced in other standards. The following XML Schema declaration allows any element to be present past this point. The selection of the "lax" value of the processcontents attribute allows older implementations that do not understand the extension to ignore it. Applications that expect a particular extension should use a modified schema (using processcontents="strict") or use means other than schema validation to ensure proper syntax. --> <xs:any namespace="##other" processcontents="lax" minoccurs="0" maxoccurs="unbounded" /> </xs:sequence> </xs:complextype> </xs:element> </xs:sequence> </xs:complextype> <!-- A segment is further divided into "clips", which are selections of external essence to be reproduced in series. Each Resource element in a Track specifies an exact sequence of editable units (a "clip") of external essence to be played. The segment's contents is the concatenation of the clips in the 121
122 order defined in the XML file. --> <!-- Base class for segment types --> <xs:complextype name="segmenttype"> <xs:sequence> <xs:element name="id" type="dcml:uuidtype"/> <xs:element name="trackid" type="dcml:uuidtype" minoccurs="0" /> <xs:element name="resource" type="cpl:baseresourcetype" maxoccurs="unbounded" /> </xs:sequence> </xs:complextype> <xs:complextype name="baseresourcetype" abstract="1"> <xs:sequence> <xs:element name="id" type="dcml:uuidtype" /> <xs:element name="annotation" type="dcml:usertexttype" minoccurs="0" /> <!-- Count of edit units (of the IntrinsicEditRate) to be skipped before start of playback. --> <xs:element name="entrypoint" type="xs:nonnegativeinteger" minoccurs="0" /> <!-- Count of edit units (of the CPL EditRate) to be played. --> <xs:element name="sourceduration" type="xs:positiveinteger" minoccurs="0" /> <!-- Number of times to repeat the section of edit units defined by EntryPoint and SourceDuration. --> <xs:element name="repeatcount" type="xs:positiveinteger" minoccurs="0" /> <!-- Count of edit units contained in external essence package. --> <xs:element name="intrinsicduration" type="xs:positiveinteger" /> <!-- EditRate of the referenced resource. In the case where the referenced resource is a MarkerResource, this value shall be equal to the CPL EditRate. --> <xs:element name="intrinsiceditrate" type="dcml:rationaltype" minoccurs="0" /> 122
123 </xs:sequence> </xs:complextype> <!-- A resource is a chunk of essence that can be played (reproduced). It is defined more precisely as a sequence of editable units of essence, where the essence comprising the editable unit(s) is contained in a track file external to the CPL. (The track file will be an AS-02 MXF file). --> <xs:complextype name="trackfileresourcetype"> <xs:complexcontent> <xs:extension base="cpl:baseresourcetype"> <xs:sequence> <xs:element name="sourceencoding" type="dcml:uuidtype" /> <!-- Identifies external essence to be reproduced. The value shall be the PackageUID extracted from the MXF file containing the essence to be reproduced. --> <xs:element name="assetid" type="dcml:uuidtype" /> <!-- If the MXF track file is encrypted, these elements provide additional information. --> <xs:element name="keyid" type="dcml:uuidtype" minoccurs="0" /> <xs:element name="hash" type="xs:base64binary" minoccurs="0" /> </xs:sequence> </xs:extension> </xs:complexcontent> </xs:complextype> <xs:complextype name="stereoimagetrackfileresourcetype"> <xs:complexcontent> 123
124 <xs:extension base="cpl:baseresourcetype"> <xs:sequence> <xs:element name="lefteye" type="cpl:trackfileresourcetype" /> <xs:element name="righteye" type="cpl:trackfileresourcetype" /> </xs:sequence> </xs:extension> </xs:complexcontent> </xs:complextype> <xs:complextype name="markerresourcetype"> <xs:complexcontent> <xs:extension base="cpl:baseresourcetype"> <xs:sequence> <xs:element name="marker" type="cpl:markertype" maxoccurs="unbounded" /> </xs:sequence> </xs:extension> </xs:complexcontent> </xs:complextype> <!-- Marker (a'la ST 429-7) --> <xs:complextype name="markertype"> <xs:sequence> <xs:element name="id" type="dcml:uuidtype" /> <xs:element name="annotation" type="dcml:usertexttype" minoccurs="0" /> <xs:element name="label"> <xs:complextype> <xs:simplecontent> <xs:extension base="xs:string"> <xs:attribute name="scope" type="xs:anyuri" use="optional" default=" /> 124
125 </xs:extension> </xs:simplecontent> </xs:complextype> </xs:element> <xs:element name="offset"> <xs:simpletype> <xs:restriction base="xs:nonnegativeinteger"> <xs:mininclusive value="0" /> </xs:restriction> </xs:simpletype> </xs:element> </xs:sequence> </xs:complextype> <!-- ContentVersion (a'la ST 429-7, with expansion option) --> <xs:complextype name="contentversiontype"> <xs:sequence> <xs:element name="id" type="xs:anyuri" /> <xs:element name="label" type="dcml:usertexttype" /> <xs:any namespace="##other" processcontents="lax" minoccurs="0" /> </xs:sequence> </xs:complextype> <!-- ContentKind (a'la ST 429-7) --> <xs:complextype name="contentkindtype"> <xs:simplecontent> <xs:extension base="xs:string"> <xs:attribute name="scope" type="xs:anyuri" use="optional" default=" /> 125
126 </xs:extension> </xs:simplecontent> </xs:complextype> <!-- Essence Descriptor anchor type. See description of EssenceDescriptorList above. --> <xs:complextype name="essencedescriptorbasetype"> <xs:sequence> <xs:element name="id" type="dcml:uuidtype" /> <xs:any namespace="##other" processcontents="strict" minoccurs="0" /> </xs:sequence> </xs:complextype> <!-- SMPTE ST 0012 timecode type --> <xs:simpletype name="timecodetype"> <xs:restriction base="xs:string"> <xs:pattern value="[0-2][0-9]:[0-5][0-9]:[0-5][0-9]:[0-5][0-9]" /> </xs:restriction> </xs:simpletype> <!-- A Locale is a grouping of region or country codes, rating and optional comment elements that identifies the scope of the intended audience in terms of location, language and censorship authority. --> <xs:complextype name="localetype"> <xs:sequence> <xs:element name="annotation" type="dcml:usertexttype" minoccurs="0" /> <xs:element name="ratinglist" minoccurs="0"> <xs:complextype> <xs:sequence> <xs:element name="rating" maxoccurs="unbounded"> <!-- Rating (a'la ST 429-7) --> <xs:complextype> 126
127 <xs:sequence> <xs:element name="agency" type="xs:anyuri" /> <xs:element name="label" type="xs:string" /> </xs:sequence> </xs:complextype> </xs:element> </xs:sequence> </xs:complextype> </xs:element> <xs:element name="languagelist" minoccurs="0"> <xs:complextype> <xs:sequence> <xs:element name="language" type="xs:string" maxoccurs="unbounded" /> <!-- RFC 5646 language code --> </xs:sequence> </xs:complextype> </xs:element> <xs:choice> <xs:element name="countrylist" minoccurs="0"> <xs:complextype> <xs:sequence> <xs:element name="country" type="xs:string" maxoccurs="unbounded" /> <!-- ISO 3166 country code --> </xs:sequence> </xs:complextype> </xs:element> <xs:element name="regionlist" minoccurs="0"> <xs:complextype> <xs:sequence> 127
128 <xs:element name="region" type="xs:string" maxoccurs="unbounded" /> <!-- unconstrained --> </xs:sequence> </xs:complextype> </xs:element> </xs:choice> </xs:sequence> </xs:complextype> </xs:schema> <!-- the end --> 128
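Before turning to the complete instance document in B2, the following hypothetical fragment illustrates how the optional CompositionTimecode element defined in the schema above might be populated for a 24 fps, progressive, non-drop-frame composition starting at 01:00:00:00. The values shown are illustrative only.

<CompositionTimecode>
  <TimecodeDropFrame>false</TimecodeDropFrame> <!-- non-drop frame -->
  <TimecodeInterlace>false</TimecodeInterlace> <!-- progressive -->
  <TimecodeRate>24</TimecodeRate> <!-- the "frames" field counts 0 through 23 -->
  <TimecodeStartAddress>01:00:00:00</TimecodeStartAddress> <!-- time code at edit unit zero -->
</CompositionTimecode>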
129 B2 Composition PlayList (CPL) Example <?xml version="1.0" encoding="utf-8"?> <CompositionPlaylist xmlns=" xmlns:ed=" xmlns:xsi=" <Id>urn:uuid:e38e945b-8c a020-c2c4a0c72da5</Id> <Annotation>An Example CPL</Annotation> <IssueDate> T21:35:13+00:00</IssueDate> <Creator>jhurst+emacs</Creator> <ContentTitle>Perfect Movie IMF</ContentTitle> <EssenceDescriptorList> <EssenceDescriptor> <Id>urn:uuid:9a57649b-5f b-56f25a67a01b</Id> <ed:imageessencedescriptor> <ed:containerheight>1080</ed:containerheight> <ed:containerwidth>1920</ed:containerwidth> <ed:colorsampling>4:4:4</ed:colorsampling> <ed:colorencoding>rgb</ed:colorencoding> <ed:pixelaspectratio>1 1</ed:PixelAspectRatio> </ed:imageessencedescriptor> </EssenceDescriptor> <EssenceDescriptor> <Id>urn:uuid:604b3683-e7dd-4c01-8a10-7f98163d6a99</Id> <ed:audioessencedescriptor> <ed:samplerate> </ed:SampleRate> <ed:audiosamplesperframe>2000</ed:audiosamplesperframe> <!-- equals resource IntrinsicEditRate / EditRate --> 129
130 <ed:channelcount>2</ed:channelcount> </ed:audioessencedescriptor> </EssenceDescriptor> </EssenceDescriptorList> <TotalRunningTime>00:01:00</TotalRunningTime> <LocaleList> <Locale> <Annotation>US English, North America</Annotation> <RatingList> <Rating> <Agency> <Label>PG-13</Label> </Rating> </RatingList> <LanguageList> <Language>en</Language> </LanguageList> <CountryList> <Country>us</Country> <Country>cn</Country> </CountryList> </Locale> </LocaleList> <!-- the playlist starts here --> <EditRate>24 1</EditRate> <SequenceList> <Sequence> <Id>urn:uuid:dfd1fab8-0eec-4414-bcc3-e2ecba48f962</Id> 130
131 <SegmentList> <ImageSegment> <Id>urn:uuid:bbac696d-f6c9-40de-bd56-e39e7fa5ed7e</Id> <Resource xsi:type="trackfileresourcetype"> <Id>urn:uuid:2204b5a2-70c7-4c94-83f0-c2c78c1053a1</Id> <IntrinsicDuration>2880</IntrinsicDuration> <IntrinsicEditRate>24 1</IntrinsicEditRate> <SourceEncoding>urn:uuid:9a57649b-5f b-56f25a67a01b</SourceEncoding> <!-- EssenceDescriptor reference --> <AssetId>urn:uuid:fb45d31e-dcae-47ad-899b-2e5e1022f19c</AssetId> <!-- external MXF file reference --> </Resource> </ImageSegment> <AudioSegment> <Id>urn:uuid:6d603ae2-b8af-4709-a5f3-8720f96395e1</Id> <Resource xsi:type="trackfileresourcetype"> <Id>urn:uuid:0b8e8fce-c7cc-4e22-a138-cdb31e3a37da</Id> <IntrinsicDuration>2800</IntrinsicDuration> <IntrinsicEditRate> </IntrinsicEditRate> <SourceEncoding>urn:uuid:604b3683-e7dd-4c01-8a10-7f98163d6a99</SourceEncoding> <!-- EssenceDescriptor reference --> <AssetId>urn:uuid:83de70a5-6c beb-113fdaaf3ae8</AssetId> <!-- external MXF file reference --> </Resource> </AudioSegment> <MarkerSegment> <Id>urn:uuid:1e66b1e4-f54c-40fe-8b51-f8210d17ade1</Id> <Resource xsi:type="markerresourcetype"> <Id>urn:uuid:a1be0e42-f551-4a05-a4fa-2c1c660c52b3</Id> <IntrinsicDuration>2800</IntrinsicDuration> 131
132 <Marker> <Id>urn:uuid:1b192cee-cc18-45fb-ab1e-c331b5cc940a</Id> <Label>FFOA</Label> <Offset>100</Offset> </Marker> <Marker> <Id>urn:uuid:7dc a8-42b3-9ad1-0ce7c82e8851</Id> <Label>LFOA</Label> <Offset>2700</Offset> </Marker> </Resource> </MarkerSegment> </SegmentList> </Sequence> </SequenceList> </CompositionPlaylist> 132
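The example above reproduces each Track File in full. The hypothetical fragment below shows how the optional EntryPoint, SourceDuration and RepeatCount elements defined in the B1 schema could be used to select and repeat a sub-clip of an image Track File. All UUIDs are placeholders and the values are illustrative only; they do not extend the example above.

<Resource xsi:type="TrackFileResourceType">
  <Id>urn:uuid:00000000-0000-0000-0000-0000000000aa</Id> <!-- placeholder -->
  <EntryPoint>24</EntryPoint> <!-- skip the first 24 edit units (one second at 24/1) -->
  <SourceDuration>240</SourceDuration> <!-- reproduce 240 edit units (ten seconds) -->
  <RepeatCount>2</RepeatCount> <!-- play the selected clip twice in succession -->
  <IntrinsicDuration>2880</IntrinsicDuration> <!-- total edit units in the external Track File -->
  <IntrinsicEditRate>24 1</IntrinsicEditRate>
  <SourceEncoding>urn:uuid:00000000-0000-0000-0000-0000000000bb</SourceEncoding> <!-- placeholder EssenceDescriptor reference -->
  <AssetId>urn:uuid:00000000-0000-0000-0000-0000000000cc</AssetId> <!-- placeholder external MXF file reference -->
</Resource>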
133 C - Output Profile List XML Examples The following examples illustrate various types of OPL: C1 - OPL Outline in pseudo-xml (Extended Level) The following is an outline of the OPL in pseudo-xml to illustrate some of the components: # begin pseudo-xml # <OutputProfilelist> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:cde69c7b-a f5-e8ffea82f345</Id> <AnnotationText> Avatar_FTR_S_EN-XX_US_PG13_51_2K_DI_ _PX </AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer></Issuer> <Creator></Creator> <ContentTitleText></ContentTitleText> <ContentKind></ContentKind> <CompositionPlayListReference> <ID>urn:uuid:bbf69c7b-a f5-e8ffea123fa1</ID> </CompositionPlayListReference> <OutputText>US English Mbps Master</OutputText> <Language>EN</Language> <Country>US</Country> # Below specifies desired Output Format for Video/Image <ImageOutputFormat/> # Below specifies desired Output Format for Audio <AudioOutputFormat/> # Below specifies the desired Encoding/Transcoding <EncodingFormat> <StandardsBody /> <Label /> <Preferred Encoder/> <GenericEncoder/> </ EncodingFormat > # Below specifies the desired ColorSpaceTransformation <ColorTransforms> <OutputColorSpace/> < Preferred Conversion/> #; 3d Lut data? </PreferredConversion> <GenericConversion/> </ ColorTransforms > # Below specifies the desired Pre-Process Parameters <PreProcessOps> <OverlayParameters/> <HeadTransition/> <TailTransition/> </PreProcessOps> </ OutputProfilelist > # end of pseudo-xml # C2 - A Simple OPL (Basic Level) The following is the minimal OPL that simply calls out a reference to a composition playlist. It is up to the facility to apply any of the transformations desired to generate any desired output. 133
134 - <OutputProfileList> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:cde69c7b-a f5-e8ffea82f345</Id> <AnnotationText>Avatar_FTR_S_EN-XX_US_PG13_51_2K_DI_ _PX</AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer /> <Creator /> <ContentTitleText /> <ContentKind /> - <CompositionPlayListReference> <ID>urn:uuid:bbf69c7b-a f5-e8ffea123fa1</ID> 134
135 C3 - A Complex (level1) OPL (Extended Level) The following is the first level of complexity for an OPL. It calls out the desired Output format from an Image and Audio format. It is up to the post facility to determine the best way to deliver the desired output. - <OutputProfileList> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:cde69c7b-a f5-e8ffea82f345</Id> <AnnotationText>Avatar_FTR_S_EN-XX_US_PG13_51_2K_DI_ _PX</AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer /> <Creator /> <ContentTitleText /> <ContentKind /> - <CompositionPlayListReference> <ID>urn:uuid:bbf69c7b-a f5-e8ffea123fa1</ID> </CompositionPlayListReference> <OutputText>US English Mbps Master</OutputText> <Country>US</Country> - <ImageOutputFormat> - <BitRate> - <ConstantBitRate> <Value>50</Value> <Label>Mbps</Label> </ConstantBitRate> <AverageBitRate /> <MaxBitRate /> <MinBitRate /> </BitRate> <BitDepth>8</BitDepth> - <ColorEncoding> <ColorSpace>Rec-709</ColorSpace> <ChromaFormat>4:2:2</ChromaFormat> </ColorEncoding> - <CompressionStandard> <StandardsBody>MPEG-LA</StandardsBody> - <CompressionType> <Label>MPEG2</Label> <PictureCoding>I-FRAME ONLY</PictureCoding> <ProfileType>HIGH_PROFILE</ProfileType> <LevelType>HIGH_LEVEL</LevelType> </CompressionType> </CompressionStandard> - <SpatialParameters> - <CanvasCoordinates> <x1>0</x1> <y1>0</y1> <x2>1919</x2> <y2>1079</y2> </CanvasCoordinates> + <Scale> - <Crop> <x1>10</x1> <y1>0</y1> <x2>1010</x2> <y2>1079</y2> </Crop> </SpatialParameters> <FrameRate>24</FrameRate> </ImageOutputFormat>...continued on the next page 135
136 A Complex (level1) OPL (continued from the previous page) # Below specifies desired Output Format for Audio - <AudioOutputFormat> <AudioConfig /> <SamplingFreq>48000</SamplingFreq> <BitDepth>16</BitDepth> - <CompressionStandard> <StandardsBody /> <Label /> </CompressionStandard> </AudioOutputFormat> </OutputProfileList> C4 - A Complex (level2) OPL (Extended Level) The following example illustrates another level of complexity for an OPL, creating an NTSC from an HD IMP: - <OutputProfilelist> # Below specifies Preamble, Note the reference to the CPL, compositionplaylistreference <Id>urn:uuid:abc69c7b-a f5-a8ffea82f345</Id> <AnnotationText>Ratatouille_TH_FEA_DPX-V_1080P23_ENG-ENG_ </AnnotationText> <IssueDate> T03:21:28-07:00</IssueDate> <Issuer>Disney</Issuer> <Creator>Pixar</Creator> <ContentTitleText /> <ContentKind /> - <CompositionPlayListReference> <ID>urn:uuid:aaf69c7b-a f5-e8ffea123fa1</ID> </CompositionPlayListReference> <OutputText>US English 2.39 DPX Master</OutputText> <TotalRunningTime>01:51:15:10</TotalRunningTime> <Language>EN</Language> <Country>US</Country> # Below specifies desired Output Format for Video/Image, but not the steps to get there - <ImageOutputFormat> - <ColorEncoding> <StandardsBody>ITU</StandardsBody> <ColorSpace>Rec-601</ColorSpace> <ChromaFormat>4:2:2</ChromaFormat> <ChromaEncoding>YCBCR</ChromaEncoding> <BitDepth>10</BitDepth> <TransferFunction>Linear</TransferFunction> <CodeRange>Limited</CodeRange> # </ColorEncoding> - <CompressionStandard> <CompressionType>Uncompressed</CompressionType> <Label>None</Label> </CompressionStandard> - <SpatialParameters> - <CanvasCoordinates> <x1>0</x1> <y1>0</y1> <x2>719</x2> <y2>485</y2> </CanvasCoordinates> 136
137 continuation of a Complex (Level 2) OPL: + <ActiveCoordinates> <PixelAspectRatio>1.21</PixelAspectRatio> </SpatialParameters> - <FrameRate> <Rate>59.94</Rate> <Raster>I</Raster> # Interlaced <TimecodeType>DF</TimecodeType> # Drop-Frame </FrameRate> </ImageOutputFormat> # Below specifies desired Output Format for Audio - <AudioOutputFormat> <SampleRate>48000</SampleRate> <BitDepth>24</BitDepth> <SamplesPerFrame>800.8</SamplesPerFrame> - <CompressionStandard> <CompressionType>LPCM</CompressionType> <Label>PCM</Label> </CompressionStandard> - <ChannelLayout> - <Channel01> <Language>English</Language> <Config>2.0</Config> <Channel>L</Channel> </Channel01> - <Channel02> <Language>English</Language> <Config>2.0</Config> <Channel>R</Channel> </Channel02> - <Channel03> <Language>ME</Language> <Config>2.0</Config> <Channel>L</Channel> </Channel03> - <Channel04> <Language>ME</Language> <Config>2.0</Config> <Channel>R</Channel> </Channel04> </ChannelLayout> <PitchCorrection>No</PitchCorrection> </AudioOutputFormat> # Below specifies the desired Pre-Process Parameters - <PreProcessOps> - <Process01> <Label>Resize</Label> - <Scale> <Filter>Lanczos</Filter> <FilterSetting01>3-Lobe</FilterSetting01> <HSize>720</HSize> <VSize>486</VSize> </Scale> </Process01> - <Process02> 137
138 continuation of a Complex (Level 2) OPL: <Label>Color Space Conversion</Label> - <ColorTransforms> - <ColorEncoding> <StandardsBody>ITU</StandardsBody> <ColorSpace>Rec-601</ColorSpace> <ChromaFormat>4:4:4</ChromaFormat> <ChromaEncoding>YCBCR</ChromaEncoding> <BitDepth>10</BitDepth> <TransferFunction>Linear</TransferFunction> <CodeRange>Limited</CodeRange> # </ColorEncoding> - <ColorSampling> <Filter>Mean</Filter> <ChromaFormat>4:2:2</ChromaFormat> </ColorSampling> </ColorTransforms> </Process02> - <Process03> <Label>Timecode Change</Label> - <TimecodeOutput> <TCRate>59.94</TCRate> <TCRaster>I</TCRaster> <TCType>DF</TCType> - <TCChange> <Pulldown>Yes</Pulldown> #Use 3:2 Pulldown <AFrame>01:00:00:00</AFrame> #Location of A Frame </TCChange> </TimecodeOutput> 138
139 C5 - A Pre-processing section for an OPL (Extended Level) The following example illustrates how one should include pre-processing parameters in an OPL: <PreProcessOps> <OverlayParameters> <OverlayItem> <OverlayType>image-alpha</OverlayType> <OverlaySourceID>urn:uuid:ca df-8a c9a66</OverlaySourceID> #note: could be single image or an image sequence (must have alpha) <OverlaySourceColorSpace>rec709</OverlaySourceColorSpace> <TextColor></TextColor> <TextOutlineColor></TextOutlineColor> <size>500x300</size> <scale> <xscale>1.0</xscale> <yscale>1.0</yscale> </scale> <Position> #note: assuming upper-left of image (or image sequence) is origin. <x>1700</x> <y>550</y> </Position> <Opacity>50</Opacity> <CompositeMethod>blend</CompositeMethod> <StartFrame>0</StartFrame > <EndFrame>100101</EndFrame> </OverlayItem> <OverlayItem> <OverlayType>text</OverlayType> <OverlaySourceID></OverlaySourceID> <OverlaySourceColorSpace>rec709</OverlaySourceColorSpace> <TextItem> <Annotation>Property of Warner Bros.</Annotation> <Color>white (or #FFFFFF)</Color> <OutlineColor>black (or #000000)</OutlineColor> <Font>arial</Font> <Justification>center</Justification> </TextItem> <size>20</size> <scale> <xscale>1.0</xscale> <yscale>1.0</yscale> </scale> <Position> #note: assuming upper-left of image (or image sequence) is origin. <x>960</x> <y>1050</y> </Position> <Opacity>100</Opacity> <CompositeMethod>over</CompositeMethod> <StartFrame>0</StartFrame > <EndFrame>10000</EndFrame> </OverlayItem> <OverlayItem> <OverlayType>text</OverlayType> <OverlaySourceID></OverlaySourceID> <OverlaySourceColorSpace>graphic</OverlaySourceColorSpace> <TextItem> <Annotation>Property of Warner Bros.</Annotation> <Color>white (or #FFFFFF)</Color> <OutlineColor>black (or #000000)</OutlineColor> <Font>arial</Font> <Justification>left</Justification> </TextItem> <size>20</size> <scale>1.0</scale> <Position> #note: assuming upper-left of image (or image sequence) is origin. <x>160</x> 139
140 <y>1000</y> </Position> <Opacity>100</Opacity> <CompositeMethod>over</CompositeMethod> <StartFrame>20000</StartFrame > <EndFrame>30000</EndFrame> </OverlayItem> </OverlayParameters> <Transitions> <TransitionItem> <Type>fadeup</Type > <Duration>48</Duration> <StartFrame>11</StartFrame > <StartASideOpacity>0</StartASideOpacity > <StartBSideOpacity>0</StartBSideOpacity > <EndFrame>58</EndFrame > <EndASideOpacity>100</EndASideOpacity > <EndBSideOpacity>0</EndBSideOpacity > </TransitionItem> <TransitionItem> <Type>dissolve</Type > <Duration>48</Duration> <StartFrame>100</StartFrame > <StartASideOpacity>0</StartASideOpacity > <StartBSideOpacity>100</StartBSideOpacity > <EndFrame>100101</EndFrame > <EndASideOpacity>100</EndASideOpacity > <EndBSideOpacity>0</EndBSideOpacity > </TransitionItem> </Transitions> </PreProcessOps> 140
141 GLOSSARY OF TERMS
Table 22: Glossary of Terms
AES: Acronym for Advanced Encryption Standard.
AES: Acronym for Audio Engineering Society.
ANSI: Acronym for American National Standards Institute.
API: Acronym for Application Programming Interface.
Broadcast Wave: Digital Audio file format developed and standardized by the EBU; recommendation ITU-R BR (2007).
Burned-In: Where visual data that is normally supplemental to a motion picture is irrevocably added to the motion-picture image by compositing the data with the underlying image.
Captions: Text that is a representation, often in the same language, of dialog and audio events occurring during scenes of a motion picture. (Generally associated with a dialog and audio event translation for the deaf and hard of hearing.)
CBC: Acronym for Cipher Block Chaining mode.
CBR: Acronym for Constant Bit Rate for image compression.
CIE: Acronym for International Commission on Illumination (Commission Internationale de l'Eclairage).
Closed: Referring to visual data that is supplemental to a motion picture being displayed off-screen.
Composition: A complete artistic or informational motion picture work, such as a feature, episode, trailer, or an advertisement, etc. A Composition consists of a Metadata Composition PlayList along with the Essence and other Metadata track files that define the work.
CPL: Acronym for Composition PlayList, the definitive PlayList for specifying how a Composition is played and what track files are required.
DCP: Acronym for a Digital Cinema Package, the set of files that are the result of the encoding, encryption and packaging process.
Distribution Package: The collection of files delivered by the distributor to the exhibitor. A Distribution Package may contain pieces of a Composition or several compositions, a complete Composition, replacement/update files, etc.
DM: Acronym for Descriptive Metadata. 141
142
DRM: Acronym for Digital Rights Management.
DSM: Acronym for Digital Source Master, a digital master created in post-production from which different versions and duplication masters may be created.
DVD: Acronym for Digital Versatile Disc.
e.g.: Abbreviation for the Latin phrase exempli gratia, meaning "for example".
EBU: Acronym for European Broadcast Union (a standardization organization).
Edit Rate: A number of Editable Units to be reproduced during a temporal interval having a duration of exactly one (1.0) second. Because Edit Rate values are not always integer values and sometimes require many digits of precision, Edit Rate values are expressed as a rational number (the ratio of two integers).
Editable Unit: The smallest temporal increment of access to Essence, e.g. a frame or a sample.
Essence: Image, audio, subtitles, or any content that is presented to a human being in a presentation. The audio, image and data resources that ultimately are intended for a viewing and/or listening experience.
ETC: Acronym for Entertainment Technology Center.
FIPS: Acronym for Federal Information Processing Standards.
FM: Acronym for Forensic Marking.
Forensic Marking: Data embedded in essence to provide forensic tracking information in the event of content theft. Such marking can be visible or non-visible, audible or non-audible.
FPS: Acronym for Frames per Second.
Frame Rate: The number of frames per second. Frame Rate values are expressed as a rational number (the ratio of two integers).
HD: Acronym for High Definition.
HI: Acronym for Hearing Impaired.
HMAC: Acronym for Hashing Message Authentication Codes.
Hz: Abbreviation for Hertz, a unit of frequency expressed in cycles per second.
i.e.: Abbreviation for the Latin phrase id est, meaning "that is".
IEC: Acronym for International Electrotechnical Commission.
IMF: Acronym for Interoperable Master Format. 142
143
Intrinsic Duration: The total number of Editable Units in a Track File.
IP: Acronym for Intellectual Property.
ISO: Acronym for International Organization for Standardization.
ITU: Acronym for International Telecommunications Union.
JPEG: Acronym for Joint Photographic Experts Group, the international body that developed the JPEG 2000 standard.
Key: Electronic data used to allow data encryption and decryption.
Key Epoch: The period of time during which a given decryption key is valid. The key epoch defines a minimum practical time period for use of encrypted track files.
kHz: Acronym for kilohertz, one thousand cycles per second, a measure of frequency.
KLV: Acronym for Key Length Value, used by MXF to parse binary data.
Localizations: Text on screen representing either non-source language dialog or information pertinent to the story, such as time and place. This is specifically the text that is absent in text-less masters. This text is localized or translated for various markets either through subtitles or entire image replacement.
LTC: Acronym for Linear Time Code.
LUT: Acronym for Look Up Table.
Main Titles: A credit sequence generally shown near the beginning of a motion picture.
Metadata: Data about data, or data describing other data. Information that is considered ancillary to or otherwise directly complementary to essence. Information that is useful or of value when associated with the essence being provided.
MTBF: Acronym for Mean Time Between Failure.
MXF: Acronym for Material eXchange Format.
Native End Point: The last Editable Unit of a Track File.
Native Start Point: The first Editable Unit of a Track File. All Track Files are viewed by a Composition Playlist as a sequence of Editable Units numbered from 0 (zero). Consequently, the Editable Unit number of the Native Start Point of a Track File shall always be 0 (zero).
NDF: Acronym for Non Drop Frame (time code). 143
144
NTSC: Acronym for National Television System Committee, which developed the NTSC television broadcasting standard.
Open: Referring to visual data that is supplemental to a motion picture being displayed on-screen.
Operational Pattern: An MXF construct to define file structures.
Packing List: A list describing the files and providing a means for authentication of the files as delivered in a package.
PAL: Acronym for Phase Alternation by Line, a television broadcasting standard.
Playable Region: The set of Editable Units within a Track File that is intended to be reproduced. A Track File may contain Editable Units before and/or after the Playable Region.
PlayList: Conceptually, the format and structure of the various lists used to define the playback of content.
PNG: Acronym for Portable Network Graphics, an extensible file format for the lossless, portable, well-compressed storage of raster images defined by the PNG Development Group.
QC: Acronym for Quality Control.
RAND: Acronym for reasonable and non-discriminatory.
Reel: A conceptual period of time having a specific duration of generally 10 to 20 minutes. Used primarily in feature film production.
Renewable: A software component is renewable if it can be remotely, smoothly and possibly automatically upgraded or replaced without significantly disturbing system operations. A system shutdown and normal restart is acceptable, provided that after the restart, the system can be operated as before.
Replaceable: A component is said to be replaceable if it can be upgraded or replaced without significantly disturbing system operations. A system shutdown and restart is acceptable, provided that after the replacement, the system can be operated as before.
Resource: Associates metadata or essence with a playable portion of the Composition Playlist timeline. A Resource may reference a Playable Region within a Track File.
Sample Rate: The number of essence samples per second. Sample Rate values are expressed as a rational number (the ratio of two integers). 144
145
SD: Acronym for Standard Definition.
Segment: An ordered collection of Resources to be reproduced sequentially.
Sequence: A collection of Segments intended to be reproduced in parallel.
SHA1: Acronym for Secure Hashing Algorithm 1.
SMPTE: Acronym for Society of Motion Picture and Television Engineers.
Subpicture: A multiple-image file format for the transport of visual data supplemental to a motion picture that is intended only for graphic overlay with the main image output of a digital projector.
Subtitle: Text that is a representation, in a different language, of dialog occurring during scenes of a motion picture. Generally associated with dialog translation for localization of a motion picture in a particular territory.
TCP/IP: Acronym for Transmission Control Protocol / Internet Protocol.
TDES or 3DES: Acronym for Triple Data Encryption Standard. TDES or 3DES was adopted as a federal standard in 1998 [FIPS (46-3) and ANSI standard X9.32].
Track File: The smallest element of a package that can be managed or replaced as a distinct asset. A track file may contain Essence and/or Metadata, and its duration matches an associated Reel. A file containing a single Essence, such as audio, image or subtitle essence.
UDP: Acronym for User Datagram Protocol.
UL: Acronym for Universal Label, used in MXF.
Unicode: The Universal Multiple-Octet Coded Character set, the [ISO/IEC 10646:2003] standard that defines a single code for representation, interchange, processing, storage, entry and presentation of the written form of the world's major languages.
urn: Acronym for uniform resource name.
USB: Acronym for Universal Serial Bus, a standardized serial communications connection found on computers.
UTC: Acronym for Universal Coordinated Time.
UUID: Acronym for Universal Unique IDentifier.
VBR: Acronym for Variable Bit Rate.
VFX: Acronym for Visual Effects. 145
146
VI: Acronym for Visually Impaired.
Visually Lossless: An image compression method is considered visually lossless when the processed image is indistinguishable from the unprocessed image under normal theatrical viewing conditions.
VOD: Acronym for Video on Demand.
VPN: Acronym for Virtual Private Network.
W3C: Acronym for the World Wide Web Consortium, the organization responsible for the development of Internet protocols.
XML: Acronym for eXtensible Markup Language. 146
Red Bee Media. Technical file and tape delivery specification for Commercials. Applies to all channels transmitted by Red Bee Media
Red Bee Media Technical file and tape delivery specification for Commercials Applies to all channels transmitted by Red Bee Media Red Bee Media Rev. November 2010 Introduction Commercial copy is delivered
How To Write An Inspire Directive
INSPIRE Infrastructure for Spatial Information in Europe Detailed definitions on the INSPIRE Network Services Title Detailed definitions on the INSPIRE Network Services Creator Date 2005-07-22 Subject
RECOMMENDATION ITU-R BR.469-7 *, ** Analogue composite television tape recording
Rec. ITU-R BR.469-7 1 RECOMMENDATION ITU-R BR.469-7 *, ** Analogue composite television tape recording Standards for the international exchange of television programmes on magnetic tape *** (Question 239/11)
have HD broadcast channels or are broadcasting HD during the primetime hours. Cable
Avid HD Workflows Table of Contents Introduction.... 1 Avid and HD................................................................................. 1 HDV workflow... 3 HD workflow overview.........................................................................
ATSC Digital Television Standard: Part 4 MPEG-2 Video System Characteristics
ATSC Digital Television Standard: Part 4 MPEG-2 Video System Characteristics Document A/53 Part 4:2009, 7 August 2009 Advanced Television Systems Committee, Inc. 1776 K Street, N.W., Suite 200 Washington,
SUBTITLING AND CLOSED CAPTION MANAGEMENT
SUBTITLING AND CLOSED CAPTION MANAGEMENT Corporate North & Latin America Asia & Pacific Other regional offices Headquarters Headquarters Headquarters Available at +32 4 361 7000 +1 947 575 7811 +852 2914
VTrak G1100 Application and Performance Notes
VTrak G1100 Application and Performance Notes Version 1.0 Date: 11/5/2014 Copyright 2014, Promise Technology, Inc. All Rights Reserved Revision History Revision Date Author Description 0.69 10/20/14 Gary
White paper. An explanation of video compression techniques.
White paper An explanation of video compression techniques. Table of contents 1. Introduction to compression techniques 4 2. Standardization organizations 4 3. Two basic standards: JPEG and MPEG 4 4. The
Video Conferencing Glossary of Terms
Video Conferencing Glossary of Terms A Algorithm A step-by-step problem-solving procedure. Transmission of compressed video over a communications network requires sophisticated compression algorithms.
Solutions for Disaster Recovery Using Grass Valley Integrated Playout Systems
APPLICATION NOTE Solutions for Disaster Recovery Using Grass Valley Integrated Playout Systems Alex Lakey and Guido Bozward February 2012 In mission-critical broadcast playout environments, the best practice
Technical Brief. Quadro FX 5600 SDI and Quadro FX 4600 SDI Graphics to SDI Video Output. April 2008 TB-03813-001_v01
Technical Brief Quadro FX 5600 SDI and Quadro FX 4600 SDI Graphics to SDI Video Output April 2008 TB-03813-001_v01 Quadro FX 5600 SDI and Quadro FX 4600 SDI Graphics to SDI Video Output Table of Contents
Audio/Video Synchronization Standards and Solutions A Status Report. Patrick Waddell/Graham Jones/Adam Goldberg
/Video Synchronization Standards and Solutions A Status Report Patrick Waddell/Graham Jones/Adam Goldberg ITU-R BT.1359-1 (1998) Only International Standard on A/V Sync Subjective study with EXPERT viewers
SERIES T: TERMINALS FOR TELEMATIC SERVICES Still-image compression JPEG-1 extensions
International Telecommunication Union ITU-T T.871 TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (05/2011) SERIES T: TERMINALS FOR TELEMATIC SERVICES Still-image compression JPEG-1 extensions Information
EMC PERSPECTIVE. Understanding Content Management and Digital Asset Management Functionality
EMC PERSPECTIVE Understanding Content Management and Digital Asset Management Functionality Reader ROI Content management systems solutions are generally designed for content aggregators, distributors,
DELIVERING CAPTIONS IN DTV An NCAM DTV Access Brief
NOTICE: The following information is offered by NCAM solely as a general overview of the current status of closed captioning support in digital television. Features and capabilities of related systems
Interplay. Production and Interplay Media Asset Manager. How the addition of Media Asset Management transforms Interplay. www.avid.
Interplay Production and Interplay Media Asset Manager How the addition of Media Asset Management transforms Interplay www.avid.com Summary In January 2010, Avid acquired Blue Order, known for its comprehensive
VThis A PP NOTE PROCESSING P2 MEDIA WITH FLIPFACTORY
A PP NOTE PROCESSING P2 MEDIA WITH FLIPFACTORY Synopsis...2 MXF Codec Features and Capabilities...2 Using P2 Monitors in a Factory...3 Making P2 Media Accessible to FlipFactory...3 Access Options...5 Submitting
PDF Primer PDF. White Paper
White Paper PDF Primer PDF What is PDF and what is it good for? How does PDF manage content? How is a PDF file structured? What are its capabilities? What are its limitations? Version: 1.0 Date: October
Motionlink ABN 21 089 713 057 July 2015
Asset Input Guide Asset Delivery Physical media: All deliveries should be addressed to: Motionlink Pty Ltd Suite 903, 1 Chandos St, St Leonards NSW 2065, Australia t. +612 9439 0133 Please email relevant
Getting ready for high frame rates in Digital Cinema
Barco Getting ready for high frame rates in Digital Cinema Author Tom Bert Product Marketing Manager Barco Page 1 of 7 INTRODUCTION The digitization of cinema projection started around the year 2000. Since
balesio Native Format Optimization Technology (NFO)
balesio AG balesio Native Format Optimization Technology (NFO) White Paper Abstract balesio provides the industry s most advanced technology for unstructured data optimization, providing a fully system-independent
Submission Specifications Version 5.0 June 2015
Submission Specifications Version 5.0 June 2015 Page 2: General Submission Guidelines / Delivery Information Page 3: Technical Specifications Page 4: Additional Requirements Pages 5-6: Closed Captions
INFORMATION FOR DELIVERY MATERIALS OF FUNDED PRODUCTIONS
INFORMATION FOR DELIVERY MATERIALS OF FUNDED PRODUCTIONS Congratulations from the National Film & Sound Archive of Australia (NFSA) on securing funding for your production. You will notice that in your
CERTIFICATE IV IN I.T. (MULTIMEDIA) VIDEO PRODUCTION
CERTIFICATE IV IN I.T. (MULTIMEDIA) VIDEO PRODUCTION Australian College of Information Technology Gold Cost Brisbane AUSTRALIA www.acit.edu.au overview CERTIFICATE IV IN I.T. (MULTIMEDIA) VIDEO PRODUCTION
VIDEO PRODUCTION- COURSE OVERVIEW TECHNOLOGY EDUCATION DEPARTMENT BALDWIN HIGH SCHOOL
VIDEO PRODUCTION- COURSE OVERVIEW TECHNOLOGY EDUCATION DEPARTMENT BALDWIN HIGH SCHOOL VIDEO PRODUCTION COURSE DESCRIPTION This course will allow students to develop professional skills in video, film,
Video-Conferencing System
Video-Conferencing System Evan Broder and C. Christoher Post Introductory Digital Systems Laboratory November 2, 2007 Abstract The goal of this project is to create a video/audio conferencing system. Video
CEA Standard. HDR Static Metadata Extensions CEA-861.3. January 2015
CEA Standard HDR Static Metadata Extensions CEA-861.3 January 2015 NOTICE Consumer Electronics Association (CEA ) Standards, Bulletins and other technical publications are designed to serve the public
Technical specifications
Direction technique Tel. 514 521 2424, ext. 3100 1 Table of contents 1. FORMAT... 3 1.1 CASSETTE... 3 1.1.1 IDENTIFICATION... 3 1.2 FILE... 3 2. VIDEO... 4 2.1 SIGNAL PARAMETERS... 4 2.2 VIDEO LEVELS...
High Definition (HD) Technology and its Impact. on Videoconferencing F770-64
High Definition (HD) Technology and its Impact on Videoconferencing F770-64 www.frost.com Frost & Sullivan takes no responsibility for any incorrect information supplied to us by manufacturers or users.
. ImagePRO. ImagePRO-SDI. ImagePRO-HD. ImagePRO TM. Multi-format image processor line
ImagePRO TM. ImagePRO. ImagePRO-SDI. ImagePRO-HD The Folsom ImagePRO TM is a powerful all-in-one signal processor that accepts a wide range of video input signals and process them into a number of different
VThis App Note USING FLIPFACTORY PLAYBACK SERVICE FOR AVID INTERPLAY TRANSFER ENGINE. App Not e
App Not e USING FLIPFACTORY PLAYBACK SERVICE FOR AVID INTERPLAY TRANSFER ENGINE Synopsis...2 Installing Telestream Playback Service...4 Configuring Telestream Playback Service...11 Configuring Interplay
Power Benefits Using Intel Quick Sync Video H.264 Codec With Sorenson Squeeze
Power Benefits Using Intel Quick Sync Video H.264 Codec With Sorenson Squeeze Whitepaper December 2012 Anita Banerjee Contents Introduction... 3 Sorenson Squeeze... 4 Intel QSV H.264... 5 Power Performance...
QUADRO AND NVS DISPLAY RESOLUTION SUPPORT
QUADRO AND NVS DISPLAY RESOLUTION SUPPORT DA-07089-001_v02 October 2014 Application Note DOCUMENT CHANGE HISTORY DA-07089-001_v02 Version Date Authors Description of Change 01 November 1, 2013 AP, SM Initial
IVR Quick Start Guide. Getting off the ground with IVR
IVR Quick Start Guide Getting off the ground with IVR Copyright and Trademarks Copyright 2007-2010 OpenMarket. All rights reserved. No part of this publication may be reproduced, distributed, electronically
Implementing Closed Captioning for DTV
GRAHAM JONES National Association of Broadcasters Washington, DC ABSTRACT The Federal Communications Commission (FCC) rules impose obligations on broadcasters for captioning of digital television (DTV)
The Ad Hoc Group on television evaluation materials
SPECIAL REPORT FROM THE AD HOC GROUP Mastering and Archiving Uncompressed Digital Video Test Materials By Charles Fenimore This is a report on the status of the SMPTE Ad Hoc Group (AHG) charged with creating
A pragmatic look at higher frame rates in Digital Cinema
Barco A pragmatic look at higher frame rates in Digital Cinema Author Tom Bert Product Marketing Manager Barco Page 1 of 8 INTRODUCTION The digitization of cinema projection started around the year 2000.
STORRE: Stirling Online Research Repository Policy for etheses
STORRE: Stirling Online Research Repository Policy for etheses Contents Content and Collection Policies Definition of Repository Structure Content Guidelines Submission Process Copyright and Licenses Metadata
Switch. The only tool professionals need to play, inspect and correct all their files
Telestream Switch Product Guide Switch The only tool professionals need to play, inspect and correct all their files Ideal for visual file inspection, QC workflows, itunes content delivery, and rewrapping
Case Study: Real-Time Video Quality Monitoring Explored
1566 La Pradera Dr Campbell, CA 95008 www.videoclarity.com 408-379-6952 Case Study: Real-Time Video Quality Monitoring Explored Bill Reckwerdt, CTO Video Clarity, Inc. Version 1.0 A Video Clarity Case
FREE TV AUSTRALIA OPERATIONAL PRACTICE OP- 59 Measurement and Management of Loudness in Soundtracks for Television Broadcasting
Page 1 of 9 1. SCOPE This Operational Practice is recommended by Free TV Australia and refers to the measurement of audio loudness as distinct from audio level. It sets out guidelines for measuring and
Introduction to Digital Video
Introduction to Digital Video Significance of the topic With the increasing accessibility of technology for everyday people, things are starting to get digitalized: digital camera, digital cable, digital
MovieLabs Specification for Enhanced Content Protection Version 1.0
MovieLabs Specification for Enhanced Content Protection Version 1.0 Introduction Digital content distribution technologies are evolving and advancing at a rapid pace. Content creators are using these technologies
Broadcast Automation Without Boundaries
Pebble Beach Systems White Paper Broadcast Automation Without Boundaries By Ian Cockett -Technical Director Pebble Beach Systems Ltd www.pebble.tv Abstract The rate of change within broadcast facilities
Video compression. Contents. Some helpful concepts.
Video compression. Uncompressed video is extremely large, which makes it hard to store and move. In most cases we compress video to manage it. The amount of compression depends on the task at hand. Contents
Managing video content in DAM How digital asset management software can improve your brands use of video assets
1 Managing Video Content in DAM Faster connection speeds and improved hardware have helped to greatly increase the popularity of online video. The result is that video content increasingly accounts for
The Life of a Commercial
The Life of a Commercial Application Specification AS-12 provides solutions to the problems that arise in existing workflows As media companies migrate from the old videotape-based systems to use digital
VPAT Voluntary Product Accessibility Template
Applies to the following products and features: Utelogy U-Control o Includes Configurator, Monitor, and Dashboard Utelogy U-Manage Utelogy U-Help Summary Table Section 1194.21 Software Applications and
Intel Modular Server System MFSYS25
Intel Modular Server System MFSYS25 Intel Modular Server Control OEM Branding December 5, 2007 Enterprise Platforms and Services Division 2 Intel Modular Server System MFSYS25 Intel Modular Server Control
Dual-Link and 3G Overview
There has been some confusion in the industry related to the terminology of Dual-Link and 3G. Below is an overview of the proper terminology as we use it in AJA documentation and also some examples of
Digital Asset Management
Digital Asset Management Managing the complexity of digital assets to support your business Product images, streaming video, sound bites, logos, Flash animations, presentations, web pages these days, digital
COPYRIGHT 2011 COPYRIGHT 2012 AXON DIGITAL DESIGN B.V. ALL RIGHTS RESERVED
Subtitle insertion GEP100 - HEP100 Inserting 3Gb/s, HD, subtitles SD embedded and Teletext domain with the Dolby HSI20 E to module PCM decoder with audio shuffler A A application product note COPYRIGHT
M3039 MPEG 97/ January 1998
INTERNATIONAL ORGANISATION FOR STANDARDISATION ORGANISATION INTERNATIONALE DE NORMALISATION ISO/IEC JTC1/SC29/WG11 CODING OF MOVING PICTURES AND ASSOCIATED AUDIO INFORMATION ISO/IEC JTC1/SC29/WG11 M3039
AATB/ICCBBA Interim Guidance Document. For use of ISBT 128 by North American Tissue Banks
AATB/ICCBBA Interim Guidance Document For use of ISBT 128 by North American Tissue Banks Version 1.2.0 September 2012 Published by: ICCBBA PO Box 11309, San Bernardino, California, USA 1.909.793.6516 [email protected]
Audiovisual Asset Deliverables Standards
Audiovisual Asset Deliverables Standards. Vendor Deliverables: Video Program Deliverables Standard Digital Interactive Presentations Deliverables Standard Photo/Artwork Deliverables Standard Project Archive
How To Write A White Paper On Broadcast Media File Exchange
Broadcast Media File Interchange: Rethinking The Need For A Standard File Format White Paper Introduction A transition has begun in broadcast media facilities away from traditional dedicated broadcast
VPMS - Advanced Media Management
VPMS - Advanced Media Management Media Asset Management for Enterprise Needs As the volume of media data and the need for processing speed increases exponentially, professionals in broadcasting and other
How To Test Video Quality With Real Time Monitor
White Paper Real Time Monitoring Explained Video Clarity, Inc. 1566 La Pradera Dr Campbell, CA 95008 www.videoclarity.com 408-379-6952 Version 1.0 A Video Clarity White Paper page 1 of 7 Real Time Monitor
Field Recorder Workflow Guide
Field Recorder Workflow Guide Version 7.2 Copyright 2006 Digidesign, a division of Avid Technology, Inc. All rights reserved. This guide may not be duplicated in whole or in part without the express written
PRODUCING DV VIDEO WITH PREMIERE & QUICKTIME
This article contains guidelines and advice on producing DV video for the screen using a DV camcorder, Adobe Premiere and QuickTime. PRODUCING DV VIDEO WITH PREMIERE & QUICKTIME PRODUCING DV VIDEO WITH
Extracting and Preparing Metadata to Make Video Files Searchable
Extracting and Preparing Metadata to Make Video Files Searchable Meeting the Unique File Format and Delivery Requirements of Content Aggregators and Distributors Table of Contents Executive Overview...
Quick start guide! Terri Meyer Boake
Film Editing Tutorial Quick start guide! Terri Meyer Boake 1. Preparing yourself and your files: This information is valid for all film editing software: FCP, Premiere (the version of FC being used is
PDF/A the standard for long-term archiving
White Paper PDF/A the standard for long-term archiving What are the advantages of the PDF/A Standard? What is the PDF/A Standard? What do PDF/A-1a, PDF/A-1b, PDF/A2 mean? How is the PDF/A Standard implemented?
Technical Document Release Version 3.0. Product Sheet. MediaStore Manager. Archive manager Application Module
Technical Document Release Version 3.0 Product Sheet MediaStore Manager Archive manager Application Module MAM MediaStore Manager MediaStore is the MAM (Media and Digital Asset Management system) proposed
