AT&T U-verse Enabled Audio-Visual Formats Supported
Publication Date: August 25, 2014

Legal Disclaimer

This document and the information contained herein (collectively, the "Information") is provided to you (both the individual receiving this document and any legal entity on behalf of which such individual is acting) ("You" and "Your") by AT&T, on behalf of itself and its affiliates ("AT&T") for informational purposes only. AT&T is providing the Information to You because AT&T believes the Information may be useful to You. The Information is provided to You solely on the basis that You will be responsible for making Your own assessments of the Information and are advised to verify all representations, statements and information before using or relying upon any of the Information. Although AT&T has exercised reasonable care in providing the Information to You, AT&T does not warrant the accuracy of the Information and is not responsible for any damages arising from Your use of or reliance upon the Information. You further understand and agree that AT&T in no way represents, and You in no way rely on a belief, that AT&T is providing the Information in accordance with any standard or service (routine, customary or otherwise) related to the consulting, services, hardware or software industries. AT&T DOES NOT WARRANT THAT THE INFORMATION IS ERROR-FREE. AT&T IS PROVIDING THE INFORMATION TO YOU "AS IS" AND "WITH ALL FAULTS." AT&T DOES NOT WARRANT, BY VIRTUE OF THIS DOCUMENT, OR BY ANY COURSE OF PERFORMANCE, COURSE OF DEALING, USAGE OF TRADE OR ANY COLLATERAL DOCUMENT HEREUNDER OR OTHERWISE, AND HEREBY EXPRESSLY DISCLAIMS, ANY REPRESENTATION OR WARRANTY OF ANY KIND WITH RESPECT TO THE INFORMATION, INCLUDING, WITHOUT LIMITATION, ANY REPRESENTATION OR WARRANTY OF DESIGN, PERFORMANCE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, OR ANY REPRESENTATION OR WARRANTY THAT THE INFORMATION IS APPLICABLE TO OR INTEROPERABLE WITH ANY SYSTEM, DATA, HARDWARE OR SOFTWARE OF ANY KIND. AT&T DISCLAIMS AND IN NO EVENT SHALL BE LIABLE FOR ANY LOSSES OR DAMAGES OF ANY KIND, WHETHER DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, SPECIAL OR EXEMPLARY, INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, LOSS OF GOODWILL, COVER, TORTIOUS CONDUCT OR OTHER PECUNIARY LOSS, ARISING OUT OF OR IN ANY WAY RELATED TO THE PROVISION, NON-PROVISION, USE OR NON-USE OF THE INFORMATION, EVEN IF AT&T HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES OR DAMAGES.

Table of Contents

1 Introduction
2 Application Content Formats
  2.1 Images
    2.1.1 JPEG
    2.1.2 PNG
3 Video Streams
  3.1 MP4 Solution (with H.264 Video and AAC Audio)
    3.1.1 Encoding Requirements
    3.1.2 Additional Technical Requirements
    3.1.3 Avoiding Critical Failure Points
    3.1.4 Hosting and Quality of Service
  3.2 Windows Media / VC-1 Solution
    3.2.1 Encoding Requirements
    3.2.2 Encoding Basics
4 Audio Stream
  4.1 Audio Formats
  4.2 Audio-only Stream
5 Additional Information

Table of Tables

Table 3-1: MP4 Solution Encoding Requirements
Table 3-2: Windows Media/VC-1 Encoding Requirements
Table 4-1: Supported Sampling Frequencies and Bit Rates

1 Introduction

This document is intended for developers creating U-verse Enabled apps for the iOS or Android platforms.

The U-verse receiver (set-top box) is capable of rendering content in a variety of media formats. Linear ("live") television content and video-on-demand (VOD) content are delivered in multicast mode over reserved streams through the U-verse network, through each subscriber's Residential Gateway, to the requesting receiver. The number of High Definition and Standard Definition streams available is determined by the service bandwidth. These multicast content streams are reserved for use by U-verse-supplied content, and receive the highest content-handling priority within the network and within the receiver.

U-verse applications, including U-verse Enabled applications, can also provide content for the receiver to render. This content is carried on a best-effort basis from the source, over the Wide Area Network (WAN) and/or Local Area Network (LAN), to the receiver. The receiver supports image, video, and audio application content. The following sections provide additional details about the supported formats.

Note that formats other than those detailed here may appear to work. However, there is no guarantee that future versions of the U-verse receiver hardware and software will continue to render unsupported content, so the developer is encouraged to follow the recommendations in this document. Before an application is marketed, the developer should test all content types in a U-verse environment to ensure proper operation and rendering.

2 Application Content Formats

2.1 Images

Only JPEG and PNG image formats are supported. Other common image formats, such as GIF, are not supported.

2.1.1 JPEG

The preferred format for images is JPEG. The maximum recommended size is 854x480 pixels; the maximum supported size is 1280x720 pixels. The current documentation also lists the following limitations for JPEG images:
- Color depth supported is 32 bits per pixel (BPP).
- The CMYK color model is not supported.
- The maximum file size for content from the server is 2 MB.
- Adobe Photoshop extensions, such as printer profiles, are not supported.

2.1.2 PNG

PNG images will render, but they use twice the resources of a comparable JPEG image, so they should be used sparingly. Their use should be limited to situations where they provide a unique benefit, such as transparency. If a PNG graphic is used, the following limitations should be observed (a pre-flight check covering both formats is sketched after this list):
- PNG size is limited to 200 KB.
- Color depth supported is 32 BPP.
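The limits above are easy to check mechanically before an app is submitted. The following is a minimal sketch, assuming Python with the Pillow imaging library (neither is named by this specification); it flags the most common problems rather than guaranteeing acceptance by the receiver.

```python
# Sketch: pre-flight check of an application image against the limits above.
# Assumes Python + Pillow, which are not part of the AT&T specification.
import os
import sys
from PIL import Image

MAX_SUPPORTED = (1280, 720)               # maximum supported size
MAX_RECOMMENDED = (854, 480)              # maximum recommended size
FILE_LIMITS = {"JPEG": 2 * 1024 * 1024,   # 2 MB limit for JPEG
               "PNG": 200 * 1024}         # 200 KB limit for PNG

def check_image(path):
    issues = []
    size_bytes = os.path.getsize(path)
    with Image.open(path) as img:
        fmt = img.format                  # e.g. "JPEG", "PNG", "GIF"
        if fmt not in FILE_LIMITS:
            issues.append(f"{fmt} is not supported (JPEG and PNG only)")
            return issues
        if size_bytes > FILE_LIMITS[fmt]:
            issues.append(f"{fmt} file is {size_bytes} bytes, over the limit")
        if fmt == "JPEG" and img.mode == "CMYK":
            issues.append("CMYK color model is not supported")
        if img.width > MAX_SUPPORTED[0] or img.height > MAX_SUPPORTED[1]:
            issues.append(f"{img.width}x{img.height} exceeds 1280x720")
        elif img.width > MAX_RECOMMENDED[0] or img.height > MAX_RECOMMENDED[1]:
            issues.append(f"{img.width}x{img.height} is above the recommended 854x480")
    return issues

if __name__ == "__main__":
    for problem in check_image(sys.argv[1]):
        print("WARNING:", problem)
```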

3 Video Streams

This specification describes requirements for audio/video content that is accessed from within an application running on AT&T U-verse receivers.

There are two different solutions for audio/video content within applications. The first is the MP4-based solution and the second is the Windows Media-based solution. They differ in several important ways:
- Container format: The MP4 solution uses the MPEG-4 standard file format. Windows Media is ASF-based.
- Video codec: The MP4 solution uses H.264 Main or High Profile. Windows Media uses WMV-9/VC-1 Main or Advanced Profile.
- Audio codec: The MP4 solution uses HE-AACv1 or AAC-LC. Windows Media uses WMA-9 or WMA Professional.
- Use cases: MP4 is limited to pre-encoded/video-on-demand use cases (content that is not being encoded live). Windows Media allows for both live and pre-encoded scenarios. Also, the MP4 solution requires both a video and an audio track to be present, whereas Windows Media allows for audio-only encoding (such as music playback).

However, the MP4 and Windows Media solutions both share these characteristics:
- Content is ultimately encoded to a single bit rate.
- Using the DVR to record the content to the receiver's local hard drive is not possible.
- Encoding specification compliance is critical for achieving good playback performance on receiver platforms, which are always computationally constrained relative to general PC playback abilities. The combination of application design, hosting approach, and delivery method will substantially impact the user experience for AT&T customers.
- For accessibility, closed captioning and one or two audio tracks are supported.

If the MP4 solution is used with this document's audio and video profile requirements, then these characteristics also apply:
- Offers video resolution designed to be similar to the U-verse linear resolution, but at half of the normal TV motion rate (59.94 Hz) used in 480i, 720p, and 1080i content.

- Allows use of 720p or two types of Standard Definition video. The higher quality of the two SD options, the Enhanced Standard Definition tier, allows support for native widescreen 16:9 content.
- Offers compelling quality at reasonable bit rates:
  - Basic SD content fits within a 1.5 Mbps (megabits per second) rate.
  - Full-resolution SD, including native widescreen SD content, fits within a 2.2 Mbps rate.
  - High Definition (720p) content fits within a 3.3 Mbps rate.
- Uses the HE-AACv1 audio codec.

3.1 MP4 Solution (with H.264 Video and AAC Audio)

3.1.1 Encoding Requirements

The three encoding tiers are Standard Definition (4:3, 480p24 [1] or 480p30 [2]), Enhanced Standard Definition (4:3 or 16:9, 480p24 [1] or 480p30 [2]), and High Definition (16:9, 720p24 [1] or 720p30 [2]). Unless a row lists per-tier values, the requirement applies to all three tiers.

Overall bit rate [3]: SD 1.25 Mbps; Enhanced SD 2.20 Mbps; HD 3.30 Mbps
Container format: MP4 (non-fragmented)
Video: Required: one track
H.264 complexity profile / level: SD Main Profile @ Level 3.0; Enhanced SD Main Profile @ Level 3.1; HD Main Profile @ Level 4.0
Video bit rate: SD 1.05 Mbps (1,050 kbps); Enhanced SD 1.9 Mbps (1,900 kbps); HD 3.0 Mbps (3,000 kbps)
Video rate structure: CBR
Spatial resolution: SD 640 x 480; Enhanced SD 720 x 480; HD 1280 x 720
Aspect ratio of frame: SD 4:3 (1.33:1); Enhanced SD 4:3 (1.33:1) or 16:9 (1.78:1); HD 16:9 (1.78:1)
Pixel aspect ratio: SD 1:1 (square); Enhanced SD PAR = 10:11 for a 4:3 image, PAR = 40:33 for a 16:9 image; HD 1:1 (square)
Aspect ratio of content within the frame: SD full frame (letterboxing prohibited); Enhanced SD full frame (letterboxing and/or pillarboxing prohibited); HD full frame (pillarboxing prohibited)
Frame rate: 23.98 [1] or 29.97 [2] progressive
Entropy mode: CABAC
Keyframe interval [4]: 4 seconds (closed GOP) with I-pictures on scene changes
B-frames (consecutive max): 3 (adaptive placement allowed)
Reference frames: 4 frames
B-frame utilization as reference frames: Enabled
Slice count: 1 slice
Motion Estimation (ME): search shape: 8x8
ME: subpixel model: Quarter pixel
Buffer compliance: Hypothetical Reference Decoder (HRD) model [5]
Coded Picture Buffer (CPB) max size: 4 seconds (400% of average video bit rate); SD 4,200,000 bits (525,000 bytes); Enhanced SD 7,600,000 bits (950,000 bytes); HD 12,000,000 bits (1,500,000 bytes)
Closed captions: CEA-608, or CEA-608 plus CEA-708; if present, must be embedded in the video track as Supplemental Enhancement Information (SEI) data (not VBI / Line 21)
Audio: Minimum: one track; Maximum: two tracks
Audio codec: High Efficiency AAC Version 1 (HE-AACv1)
Audio rate structure: CBR
Bit rate for primary track: 96 kbps
Bit rate for secondary track (if present): 64 kbps
Channel count: Stereo (2 channels per track)
Resolution: 16 bits per sample
Sampling rate: 48 kHz
Audio dialog level: -31 dBFS average (per ATSC A/85)
Error detection: CRC checksums enabled
Audio language descriptor: Required, even if only one track is present: ISO 639-2 format
Encapsulation method (optional): ADTS

Notes:
[1] p24 and 23.98p indicate ~23.976 frames per second for content mastered using a 24 fps workflow. The actual rate is 24,000/1,001.
[2] p30 and 29.97p indicate ~29.97 frames per second. This provides half the motion of full 480i, 720p, and 1080i formats, which have a 59.94 Hz motion rate. The actual rate is 30,000/1,001.
[3] Bit rate includes the video track, one or two audio tracks, closed captioning (if present), and MP4 container overhead.
[4] Keyframe interval is also referred to as I-frame interval, I-picture interval, or Random Access Point interval.
[5] Not all encoding tools have the ability to create HRD-compliant streams. Please be sure to verify this function before selecting a tool for your AT&T U-verse encoding workflow.

Table 3-1: MP4 Solution Encoding Requirements.
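Table 3-1 does not prescribe a particular encoder. As an illustration only, the sketch below maps the main video and audio parameters onto an ffmpeg/x264 command line; ffmpeg, libx264, and libfdk_aac are assumptions rather than requirements of this specification, and HRD/CPB compliance, closed-caption SEI insertion, and dialog level should still be verified with appropriate analysis tools.

```python
# Sketch: mapping Table 3-1 parameters onto an ffmpeg/x264 command line.
# The spec does not mandate an encoder; this is one possible workflow.
import subprocess
from fractions import Fraction

TIERS = {
    # name: (width, height, H.264 level, video kbps, CPB size in bits)
    "sd":          (640,  480, "3.0", 1050,  4_200_000),
    "enhanced_sd": (720,  480, "3.1", 1900,  7_600_000),
    "hd":          (1280, 720, "4.0", 3000, 12_000_000),
}

def build_command(src, dst, tier, fps=Fraction(30000, 1001)):
    w, h, level, vkbps, cpb = TIERS[tier]
    gop = round(fps * 4)                      # ~4-second keyframe interval
    return [
        "ffmpeg", "-i", src,
        # Video: H.264 Main Profile, CBR, CABAC, 3 B-frames, 4 refs, 1 slice.
        "-c:v", "libx264", "-profile:v", "main", "-level:v", level,
        "-vf", f"scale={w}:{h}", "-r", str(fps),
        "-b:v", f"{vkbps}k", "-minrate", f"{vkbps}k", "-maxrate", f"{vkbps}k",
        "-bufsize", str(cpb),
        "-g", str(gop), "-bf", "3", "-refs", "4",
        "-x264-params", "nal-hrd=cbr:slices=1:b-pyramid=normal",
        # Audio: the spec calls for HE-AACv1 at 96 kbps CBR; libfdk_aac
        # (if your ffmpeg build includes it) can produce HE-AAC.
        "-c:a", "libfdk_aac", "-profile:a", "aac_he",
        "-b:a", "96k", "-ar", "48000", "-ac", "2",
        # Non-fragmented, streaming-ready MP4: moov box placed before mdat.
        "-movflags", "+faststart",
        dst,
    ]

subprocess.run(build_command("master.mov", "out_hd.mp4", "hd"), check=True)
```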

3.1.2 Additional Technical Requirements

3.1.2.1 Container Technical Requirements
- The MP4 container must be encoded and flagged as streaming-ready. This means that the listing of keyframe locations, the moov data element, must be at the beginning of the file, before the actual audio/video payload, the mdat data element.
- Only single-bit-rate encoding is permitted. Adaptive or multi-bit-rate encoding is not currently supported.
- Only non-fragmented MP4 encoding is permitted. The fragmented MP4 container, used in the Microsoft Smooth Streaming specification, is not currently supported by AT&T for in-application content for receivers.

3.1.2.2 Video Technical Requirements
- Frame drops during encoding shall not be allowed.
- Every I-picture must be signaled as an IDR (Instantaneous Decoding Refresh), and the video stream must start with an IDR.
- The video stream must comply with the official H.264 standard (ISO/IEC 14496-10).
- Black bars must not be encoded into the actual video output. Native 4:3 sources must be used to create 4:3 encoded output. Native 16:9 sources, with at least 480 scan lines of resolution, must be used to create 16:9 encoded output.

3.1.2.3 Audio Technical Requirements
- The average audio dialog level should be -31 dBFS. The guidelines for non-Dolby audio codecs that appear in ATSC A/85 (the technical standard which covers CALM Act requirements) should be followed.
- An ISO 639-2 language descriptor is required for each audio track.
- If using the optional Secondary Audio Program track, its language descriptor must be set to a different value than the primary audio track. This is because audio track selection is currently accomplished using the receiver's global audio language preference, which does not allow for differentiation between multiple audio tracks containing different content but flagged as the same language. Allowed scenarios include:
  - English (primary) + Spanish (secondary)
  - Spanish (primary) + English (secondary)
  - English (primary) + French (secondary)
  A prohibited combination would be:
  - English (primary) + English (secondary)

3.1.2.4 Closed Captioning
- Assets may include CEA-608 and/or CEA-608 plus CEA-708 closed captions. The spoken language of the asset must appear on the CC1 track for CEA-608 and the Service 1 track for CEA-708. Translated languages should appear in CEA-608 as CC2 or CC3, and in CEA-708 as Service 2, Service 3, etc. Note that U-verse receivers give priority to CEA-608 captions if both 608 and 708 captions are present and the customer has enabled both types.
- Closed caption data must be compliant with the CEA-608-E and/or CEA-708-D specifications.
- Closed caption data must be transported as specified by ATSC A/53D, Annex F, as SEI messages within the video elementary stream.
- Please refer to the FCC website for Closed Captioning requirements: http://www.fcc.gov/cgb/consumerfacts/closedcaption.html
- Please refer to the FCC website for Closed Captioning exemptions: http://www.fcc.gov/cgb/dro/exemptions_from_cc_rules.html
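The container requirement in 3.1.2.1 (moov before mdat) is also one of the most common causes of playback failure noted in 3.1.3 below, so it is worth checking programmatically. Below is a minimal sketch using only the Python standard library; the box-walking logic follows the ISO base media file format (ISO/IEC 14496-12), but this is an illustrative check rather than a certified validator.

```python
# Sketch: verify an MP4 is "streaming-ready" (moov box appears before mdat)
# by walking the top-level ISO BMFF boxes of the file.
import struct

def top_level_boxes(path):
    """Yield (box_type, offset, size) for each top-level box in the file."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            header = f.read(8)
            if len(header) < 8:
                return
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:                     # 64-bit "largesize" follows
                size = struct.unpack(">Q", f.read(8))[0]
            elif size == 0:                   # box extends to end of file
                yield box_type.decode("latin-1"), offset, None
                return
            yield box_type.decode("latin-1"), offset, size
            offset += size
            f.seek(offset)

def is_streaming_ready(path):
    order = [name for name, _, _ in top_level_boxes(path)]
    return ("moov" in order and "mdat" in order
            and order.index("moov") < order.index("mdat"))

print(is_streaming_ready("out_hd.mp4"))
```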

3.1.3 Avoiding Critical Failure Points

By paying special attention to these items, the chance of attaining high-quality, compatible output using the MP4 solution is substantially increased:
- The MP4 container must be encoded as streaming-ready. This means that the listing of keyframe locations, the moov data element, must be at the beginning of the file, before the actual video or audio payload, the mdat data element.
- Audio-only assets are not currently playable on the receiver due to limitations in the receiver software. For audio-only content use cases, WMA-9 encoding into the WMA [ASF] container should be used. Alternately, MP3 encoding may be used as long as incompatibility with the U-verse client on the Microsoft Xbox 360 is acceptable.
- Black bars should not exist in the source video. That is, only native 4:3 and native 16:9 sources should be used.
- High Definition source assets should be used when encoding to Enhanced Standard Definition assets.
- Special pre-processing must be used for interlaced sources which are re-encoded as progressive for this specification. For 30p profiles, adaptive de-interlace methods should be used. For 24p encoded output from sources containing 3:2 pulldown, inverse telecine pre-processing should be used.
- Each audio track must be tagged with a language code adhering to the ISO 639-2 specification.
- Initiating playback by using a direct URL/URI is required.
- On the server/host platform, HTTP 1.1 should be enabled to byte-serve requested files.
- MP4 files should be delivered to the receiver with a MIME type of video/mpeg.

3.1.4 Hosting and Quality of Service

This specification allows for content hosted on the public Internet to be requested and delivered to U-verse receivers. It should be noted that content delivered in this manner is subject to the operating conditions of all network elements between the hosting server and the receiver. This could impact reliable delivery of content, since public Internet conditions are outside the direct control of AT&T network engineers. All content flowing from the public Internet to U-verse receivers is subject to best-effort QoS.
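The byte-serving and MIME-type items in 3.1.3, together with the hosting considerations in 3.1.4, can be spot-checked against a candidate server. The sketch below assumes the third-party requests library and a placeholder URL; a 206 Partial Content response indicates that the server honors HTTP/1.1 byte-range requests.

```python
# Sketch: confirm a hosting server byte-serves MP4 files and reports the
# MIME type called for above. URL and library choice are illustrative.
import requests

def check_hosting(url):
    resp = requests.get(url, headers={"Range": "bytes=0-1023"},
                        stream=True, timeout=10)
    findings = {
        # 206 Partial Content means the byte-range request was honored.
        "byte_serving": resp.status_code == 206,
        "accept_ranges": resp.headers.get("Accept-Ranges", "").lower() == "bytes",
        # The spec asks for video/mpeg rather than the more common video/mp4.
        "mime_type": resp.headers.get("Content-Type", ""),
    }
    resp.close()
    return findings

print(check_hosting("https://example.com/assets/out_hd.mp4"))
```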

3.2 Windows Media / VC-1 Solution

3.2.1 Encoding Requirements

The three encoding variants are Non-square Pixel Standard Definition (480x480), Square Pixel Standard Definition (640x480), and Non-square Pixel Enhanced Standard Definition (720x480). Unless a row lists per-variant values, the requirement applies to all three.

File extension: .asf (live) or .wmv (pre-encoded)

Video:
WMV-9/VC-1 codec profile: Main or Advanced Profile (Advanced Profile required for closed captions)
Bit rate and structure: 1.25 Mbps, Constant Bit Rate
Resolution: Non-square SD 480x480 [6]; Square-pixel SD 640x480; Non-square Enhanced SD 720x480 [6]
Image aspect ratio: 4:3
Pixel aspect ratio (PAR): Non-square SD 4:3; Square-pixel SD 1:1 (square pixels); Non-square Enhanced SD 10:11
PAR signaling: AspectRatioX + AspectRatioY [7]
Frame rate / structure: 29.97 progressive
Keyframe interval: 4 seconds
Buffer size: 2 seconds (200% of average video bit rate) (2,500,000 bits or 312,500 bytes)
Bi-directionally predicted frames (B-frames): No B-frames permitted
Closed Captioning: Requires Advanced Profile; CEA-608 with CC1 or CC1+CC2 present, encoded as ASF data (not as VBI line 21 data)

Notes:
[6] 480x480 and 720x480 are supported as long as the encoder supports the DISPLAY_EXT metadata field and the value is set to True. Check with AT&T Labs for current encoder recommendations.
[7] Parameter is optional for 640x480, but required for 480x480 and 720x480.

(Table 3-2 continues with the audio requirements below.)

Audio: The audio requirements distinguish between assets with both audio and video ("Audio + Video") and audio-only assets ("Audio Only").

Track count: Audio + Video: minimum 1, maximum 2; Audio Only: 1
Windows Media Audio codec: Audio + Video: WMA v9/v9.2 (not WMA Professional); Audio Only: WMA v9/v9.2 or WMA Professional
Audio rate structure: CBR
Bit rate: Audio + Video: primary track 96 kbps (required), secondary track 64 kbps (optional); Audio Only: for hi-fi musical fidelity, WMA Pro @ 96 kbps or WMA v9/9.2 @ 128 kbps
Resolution / sampling / channels: 16 bit / 48 kHz / 2 channels (stereo) per track
Audio dialog level: -31 dBFS average (per ATSC A/85)

Table 3-2: Windows Media/VC-1 Encoding Requirements.

3.2.2 Encoding Basics

3.2.2.1 Container

Microsoft ASF, which is more commonly recognizable as a .wmv or .wma file.

3.2.2.2 Video

If there is video, it must use the Windows Media Video 9 (also known as VC-1) codec, either Main or Advanced Profile. Note that Advanced Profile is required to support closed captions. Video is always standard definition and always 4:3 (or letterboxed within 4:3).

Tech tip: The terms WMV-9 and VC-1 are completely interchangeable as long as the container format is ASF and the Profile is the same.

3.2.2.3 Audio

The average audio dialog level should be -31 dBFS. The guidelines for non-Dolby audio codecs that appear in ATSC A/85 (the technical standard which covers CALM Act requirements) should be followed.

For assets with both audio and video, the primary audio track must use Windows Media Audio v9 or v9.2; it must not use WMA Professional. Optionally, there can be a second track for Secondary Audio Program use, such as a secondary spoken language or Audio Description (i.e., Descriptive Video for the sight-impaired). Having more than two audio tracks is not permitted. Quality is lower on the second track because its bit rate is lower than the primary track's. Each track is a stereo pair of channels.

3.2.2.4 Security

There is no HTTP/SSL transmission encryption.
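As with the MP4 path, encoded Windows Media assets are worth verifying before release. The sketch below uses ffprobe (an assumption; any stream analyzer will do) to compare a few fields against Table 3-2 and the audio rules above. ffprobe reports WMV-9 Simple/Main Profile as "wmv3", VC-1 Advanced Profile as "vc1", standard WMA 9/9.2 as "wmav2", and WMA Professional as "wmapro".

```python
# Sketch: probe a Windows Media asset and flag obvious deviations from the
# Windows Media / VC-1 requirements. File name is a placeholder.
import json
import subprocess

def probe(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-show_format", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def check_wmv(path):
    info = probe(path)
    video = [s for s in info["streams"] if s["codec_type"] == "video"]
    audio = [s for s in info["streams"] if s["codec_type"] == "audio"]
    problems = []
    if video and video[0]["codec_name"] not in ("wmv3", "vc1"):
        problems.append(f"video codec {video[0]['codec_name']} is not WMV-9/VC-1")
    if not 1 <= len(audio) <= 2:
        problems.append(f"{len(audio)} audio tracks found (1 or 2 expected)")
    for s in audio:
        if s["codec_name"] == "wmapro" and video:
            problems.append("WMA Professional is only allowed for audio-only assets")
        if int(s.get("sample_rate", 0)) != 48000 or s.get("channels") != 2:
            problems.append("audio must be 48 kHz stereo")
    return problems

print(check_wmv("promo.wmv"))
```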

4 Audio Stream

Audio content may be provided as an audio-only stream, or packaged with video content.

4.1 Audio Formats

The following audio formats are supported with video content:
- Windows Media Audio 9 (WMA9)
- Dolby Digital AC-3
- Advanced Audio Coding (AAC)
- MPEG-1 and MPEG-2 Audio Layers 1 and 2, and MPEG-1 Layer 3 (MP3)

The following table shows the supported sampling frequencies and bit rates for each audio format.

Format                                  Sampling frequency (kHz)   Bit rates (kbps)
WMA9 / WMA9.2 / WMA Pro (all stereo)    48                         64, 96, 128, 192 (in addition, WMA Pro supports 256, 384, and 440)
DD AC-3 (stereo and 5.1)                48                         128, 192, 224, 384, 448
DD Enhanced AC-3 (stereo and 5.1)       44.1, 48                   128, 192, 224, 384, 448 (E-AC-3 stereo also supports 32, 64, 96)
AAC (stereo)                            48                         128, 192
HE-AAC (stereo)                         48                         64, 96
MPEG-1 L2 (stereo)                      48                         96, 128, 192
MPEG-2 L1 (stereo)                      48                         128, 192, 384
MPEG-2 L2 (stereo)                      48                         96, 192, 384

Table 4-1: Supported Sampling Frequencies and Bit Rates.

4.2 Audio-only Stream

The only supported audio-only format is Windows Media Audio. It is possible to stream an AAC audio stream by combining it with a "dummy" video stream in MP4/H.264 format; a sketch of this approach follows. Note that the "streaming-ready" requirements described above also apply to the "dummy" stream.
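For the AAC-plus-dummy-video approach described in 4.2, the sketch below shows one way to wrap an existing AAC audio file with a black H.264 video track using ffmpeg (an assumption, not a mandated tool). The +faststart flag moves the moov box ahead of mdat to satisfy the streaming-ready requirement; the bit rates, the 640x480 dummy resolution, and the file names are illustrative only.

```python
# Sketch: wrap an AAC audio stream with a black "dummy" video track so it
# can be delivered through the MP4/H.264 path described above.
import subprocess

def wrap_audio(audio_in, mp4_out):
    cmd = [
        "ffmpeg",
        "-f", "lavfi", "-i", "color=c=black:s=640x480:r=30000/1001",  # dummy video
        "-i", audio_in,
        "-shortest",                               # stop when the audio ends
        "-c:v", "libx264", "-profile:v", "main", "-b:v", "200k",
        "-c:a", "aac", "-b:a", "128k", "-ar", "48000", "-ac", "2",
        "-movflags", "+faststart",                 # moov before mdat
        mp4_out,
    ]
    subprocess.run(cmd, check=True)

wrap_audio("podcast.aac", "podcast_wrapped.mp4")
```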

5 Additional Information

Certain closed captioning (e.g., CEA/EIA-708-B) and Digital Video Broadcasting (DVB) text data streams are supported over the MPEG-2 transport stream.

Content that includes Digital Rights Management (DRM) or copy protection presents a special challenge. Some video copy-protection schemes are supported by the receiver, but protected content, such as purchased music and movies, should generally be avoided.