Wednesday, August 21, 2019

MPEG news: a report from the 127th meeting, Gothenburg, Sweden

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

MPEG News Archive

Plenary of the 127th MPEG Meeting in Gothenburg, Sweden.
The 127th MPEG meeting concluded on July 12, 2019 in Gothenburg, Sweden with the following topics:
  • Versatile Video Coding (VVC) enters formal approval stage, experts predict 35-60% improvement over HEVC
  • Essential Video Coding (EVC) promoted to Committee Draft
  • Common Media Application Format (CMAF) 2nd edition promoted to Final Draft International Standard
  • Dynamic Adaptive Streaming over HTTP (DASH) 4th edition promoted to Final Draft International Standard
  • Carriage of Point Cloud Data Progresses to Committee Draft
  • JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition
  • Genomic information representation – WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
  • ISO/IEC 23005 (MPEG-V) 4th Edition – WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

The corresponding press release of the 127th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/127

Versatile Video Coding (VVC)

The Moving Picture Experts Group (MPEG) is pleased to announce that Versatile Video Coding (VVC) progresses to Committee Draft; experts predict a 35-60% improvement over HEVC.

The development of the next major generation of video coding standard has achieved excellent progress, such that MPEG has approved the Committee Draft (CD, i.e., the text for formal balloting in the ISO/IEC approval process).

The new VVC standard will be applicable to a very broad range of applications and will also provide additional functionalities. VVC is expected to deliver a substantial improvement in coding efficiency over existing standards, e.g., a bit rate reduction in the range of 35-60% relative to HEVC, although this has not yet been formally measured. The comparison with HEVC assumes equivalent subjective video quality at picture resolutions such as 1080p HD, 4K, or 8K UHD, for both standard dynamic range video and high dynamic range/wide color gamut content, at quality levels appropriate for consumer distribution services. For example, content that requires about 20 Mb/s with HEVC could thus be expected to need only roughly 8-13 Mb/s with VVC at the same subjective quality. The focus during the development of the standard has primarily been on 10-bit 4:2:0 content; the 4:4:4 chroma format will also be supported.

The VVC standard is being developed in the Joint Video Experts Team (JVET), a group established jointly by MPEG and the Video Coding Experts Group (VCEG) of ITU-T Study Group 16. In addition to a text specification, the project also includes the development of reference software, a conformance testing suite, and a new standard ISO/IEC 23002-7 specifying supplemental enhancement information messages for coded video bitstreams. The approval process for ISO/IEC 23002-7 has also begun, with the issuance of a CD consideration ballot.

Research aspects: VVC represents the next-generation video codec to be deployed in 2020 and beyond, and basically the same research aspects apply as for previous generations, i.e., coding efficiency, performance/complexity, and objective/subjective evaluation. Luckily, JVET documents are freely available, including the actual standard (committee draft), software (and its description), and common test conditions. Thus, researchers utilizing these resources are able to conduct reproducible research when contributing their findings and code improvements back to the community at large.
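For example, coding efficiency in such evaluations is commonly reported as the Bjøntegaard delta rate (BD-rate). A minimal sketch of the classic cubic-fit computation, assuming rate/PSNR operating points are already available (e.g., measured under the JVET common test conditions; the numbers below are invented), could look as follows:

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average bit rate difference (%) of a test codec vs. a reference
    codec over their overlapping quality range (Bjontegaard delta rate).
    Negative values mean the test codec saves bit rate."""
    log_ref, log_test = np.log(rates_ref), np.log(rates_test)
    # Fit log-rate as a cubic polynomial of quality (PSNR in dB).
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    # Integrate both fits over the common PSNR interval.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100

# Hypothetical rate (kb/s) / PSNR (dB) points for four operating points each:
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
              [650, 1300, 2600, 5200], [34.2, 36.7, 39.2, 41.7]))
```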

Essential Video Coding (EVC)

MPEG-5 Essential Video Coding (EVC) promoted to Committee Draft

Interestingly, at the same meeting as VVC, MPEG promoted MPEG-5 Essential Video Coding (EVC) to Committee Draft (CD). The goal of MPEG-5 EVC is to provide a standardized video coding solution to address business needs in some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics.

The MPEG-5 EVC standard includes a baseline profile that contains only technologies that are over 20 years old or are otherwise expected to be royalty-free. Additionally, a main profile adds a small number of extra tools, each providing a significant performance gain. All main profile tools can be individually switched off or individually switched over to a corresponding baseline tool. Organizations making proposals for the main profile have agreed to publish applicable licensing terms within two years of the FDIS stage, either individually or as part of a patent pool.

Research aspects: Similar research aspects as for VVC apply to EVC; from a software engineering perspective, it could also be interesting to further investigate the switching mechanism of individual tools and/or the fallback option to baseline tools. Naturally, a comparison with next-generation codecs such as VVC is interesting per se. The licensing aspects themselves are probably interesting for other disciplines, but that is another story...
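To illustrate the tool-switching idea, here is a hypothetical sketch of how an encoder configuration might model it; the tool names below are invented and do not correspond to the actual EVC coding tools:

```python
from dataclasses import dataclass, field

@dataclass
class EvcEncoderConfig:
    """Each main-profile tool can be switched off individually, falling
    back to its baseline counterpart (tool names here are invented)."""
    main_tools: dict = field(default_factory=lambda: {
        "advanced_intra": True,
        "advanced_inter": True,
        "extended_transform": True,
        "enhanced_loop_filter": True,
    })

    def fall_back(self, tool: str) -> None:
        # Disable a main-profile tool; the encoder then uses the
        # corresponding baseline tool instead.
        self.main_tools[tool] = False

cfg = EvcEncoderConfig()
cfg.fall_back("enhanced_loop_filter")  # e.g., due to licensing constraints
print({t: ("main" if on else "baseline") for t, on in cfg.main_tools.items()})
```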

Common Media Application Format (CMAF)

MPEG ratified the 2nd edition of the Common Media Application Format (CMAF)

The Common Media Application Format (CMAF) enables efficient encoding, storage, and delivery of digital media content (incl. audio, video, and subtitles, among others), which is key to scaling operations to support the rapid growth of video streaming over the internet. The CMAF standard is the result of widespread industry adoption of an application of MPEG technologies for adaptive video streaming over the Internet, and of widespread industry participation in the MPEG process to standardize best practices within CMAF.

The 2nd edition of CMAF adds support for a number of specifications that were a result of significant industry interest. Those include
  • Advanced Audio Coding (AAC) multi-channel;
  • MPEG-H 3D Audio;
  • MPEG-D Unified Speech and Audio Coding (USAC);
  • Scalable High Efficiency Video Coding (SHVC);
  • IMSC 1.1 (Timed Text Markup Language Profiles for Internet Media Subtitles and Captions); and
  • additional HEVC video CMAF profiles and brands.
This edition also introduces CMAF supplemental data handling as well as new structural brands for CMAF, reflecting the common practice of significant CMAF deployments in industry. Companies adopting CMAF technology will find the specifications introduced in the 2nd edition particularly useful for the further adoption and proliferation of CMAF in the market.

Research aspects: see below (DASH).

Dynamic Adaptive Streaming over HTTP (DASH)

MPEG approves the 4th edition of Dynamic Adaptive Streaming over HTTP (DASH)

The 4th edition of MPEG-DASH comprises the following features:
  • a service description that conveys how the service provider expects the service to be consumed;
  • a method to indicate the times corresponding to the production of associated media;
  • a mechanism to signal DASH profiles and features, employed codec and format profiles; and
  • supported protection schemes present in the Media Presentation Description (MPD).
It is expected that this edition will be published later this year. 

Research aspects: The CMAF 2nd and DASH 4th editions come along with a rich feature set enabling a plethora of use cases. The underlying principles are still the same, and research issues arise from updated application and service requirements with respect to content complexity, time aspects (mainly delay/latency), and quality of experience (QoE). The DASH-IF presents the Excellence in DASH Award at the ACM Multimedia Systems conference, and an overview of its academic efforts can be found here. For example, see here our recent research on bandwidth prediction in low-latency chunked streaming. Additionally, our tutorial at ACM Multimedia 2019 about a journey towards fully immersive media access reviews the state of the art in this area and how it could be extended to enable 6DoF HAS services through point cloud compression.
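As a simple illustration of why low-latency chunked streaming complicates bandwidth prediction (a minimal sketch, not the algorithm from the cited work): with chunked transfer encoding, a segment arrives as bursts paced by the live encoder, separated by idle periods in which the client simply waits, so averaging over the whole segment download underestimates the available bandwidth. One workaround is to estimate from the per-chunk bursts only:

```python
from collections import deque

class ChunkedBandwidthEstimator:
    """Sliding-window throughput estimate for chunked (CTE) downloads,
    keeping per-chunk rates and ignoring chunks whose transfer time is
    too short to yield a meaningful measurement."""
    def __init__(self, window=20, min_duration=0.01):
        self.samples = deque(maxlen=window)
        self.min_duration = min_duration  # seconds

    def on_chunk(self, nbytes, duration):
        if duration >= self.min_duration:
            self.samples.append(8 * nbytes / duration)  # bit/s

    def estimate(self):
        if not self.samples:
            return None
        # Harmonic mean dampens short high-rate outliers.
        return len(self.samples) / sum(1.0 / r for r in self.samples)

est = ChunkedBandwidthEstimator()
est.on_chunk(250_000, 0.05)  # a 40 Mb/s burst
est.on_chunk(250_000, 0.08)  # a 25 Mb/s burst
print(est.estimate())
```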

Carriage of Point Cloud Data

MPEG progresses the Carriage of Point Cloud Data to Committee Draft

At its 127th meeting, MPEG promoted the carriage of point cloud data to the Committee Draft stage, the first milestone of the ISO standard development process. This standard is the first to introduce support for volumetric media into the widely adopted ISO base media file format family of standards.

This standard supports the carriage of point cloud data comprising individually encoded video bitstreams within multiple file format tracks, in order to reflect the intrinsic nature of video-based point cloud compression (V-PCC). It also allows the carriage of point cloud data in a single file format track for applications requiring multiplexed content (i.e., the video bitstreams of multiple components are interleaved into one bitstream).

This standard is expected to support efficient access and delivery of portions of a point cloud object, considering that in many cases the entire point cloud object may not be visible to the user, depending on the viewing direction or the location of the point cloud object relative to other objects. It is currently expected that the standard will reach its final milestone by the end of 2020.

Research aspects: MPEG's Point Cloud Compression (PCC) comes in two flavors, video-based and geometry-based, but both still require packaging into file and delivery formats. MPEG's choice here is the ISO base media file format, and the efficient carriage of point cloud data is characterized by both functionality (i.e., enabling the required use cases) and performance (such as low overhead).
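As a back-of-the-envelope illustration of the performance aspect, packaging overhead can be quantified as the share of the file not occupied by the coded component bitstreams (all numbers below are invented):

```python
def container_overhead(file_size, component_streams):
    """Share of the file spent on packaging (boxes, sample tables, ...)
    rather than on the coded component bitstreams themselves."""
    return (file_size - sum(component_streams)) / file_size

# e.g., a 100 MB file carrying geometry, attribute, and occupancy video
# bitstreams of a V-PCC object:
print(f"{container_overhead(100_000_000, [60_000_000, 30_000_000, 7_000_000]):.1%}")
```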

MPEG-2 Systems/Transport Stream

JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition

At its 127th meeting, WG11 (MPEG) has extended ISO/IEC 13818-1 (MPEG-2 Systems) – in collaboration with WG1 (JPEG) – to support ISO/IEC 21122 (JPEG XS) in order to support industries using still image compression technologies for broadcasting infrastructures. The specification defines a JPEG XS elementary stream header and specifies how the JPEG XS video access unit (specified in ISO/IEC 21122-1) is put into a Packetized Elementary Stream (PES). Additionally, the specification also defines how the System Target Decoder (STD) model can be extended to support JPEG XS video elementary streams.
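To make the PES mechanism concrete, here is a minimal, generic sketch of building a PES packet with a PTS as defined in MPEG-2 Systems; it does not model the JPEG XS-specific elementary stream header introduced by the amendment, and the payload merely stands in for a JPEG XS video access unit:

```python
import struct

def pes_packet(stream_id: int, payload: bytes, pts: int) -> bytes:
    """Build a minimal PES packet with a PTS (cf. ISO/IEC 13818-1).
    Sketch only: the JPEG XS elementary stream header is not modeled."""
    def encode_timestamp(prefix: int, value: int) -> bytes:
        # 33-bit timestamp packed as 4+3+1 / 15+1 / 15+1 bits with markers.
        return bytes([
            (prefix << 4) | (((value >> 30) & 0x7) << 1) | 0x1,
            (value >> 22) & 0xFF,
            (((value >> 15) & 0x7F) << 1) | 0x1,
            (value >> 7) & 0xFF,
            ((value & 0x7F) << 1) | 0x1,
        ])

    header = bytes([
        0x80,  # '10' marker bits, no scrambling, no priority/alignment flags
        0x80,  # PTS_DTS_flags = '10' (PTS only), all other flags 0
        0x05,  # PES_header_data_length: 5 bytes of PTS follow
    ]) + encode_timestamp(0b0010, pts)
    pes_length = len(header) + len(payload)  # bytes after the length field
    return (b"\x00\x00\x01" + bytes([stream_id])
            + struct.pack(">H", pes_length) + header + payload)

# e.g., one access unit at PTS = 1 second (90 kHz clock), video stream id 0xE0
packet = pes_packet(0xE0, b"\x00" * 188, pts=90_000)
```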

Genomic information representation

WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5

The introduction of high-throughput DNA sequencing has led to the generation of large quantities of genomic sequencing data that have to be stored, transferred, and analyzed. So far, WG 11 (MPEG) and ISO TC 276/WG 5 have addressed the representation, compression, and transport of genome sequencing data by developing the ISO/IEC 23092 standard series, also known as MPEG-G. It provides a file and transport format, compression technology, metadata specifications, protection support, and standard APIs for accessing sequencing data in its native compressed format.

An important element in the effective usage of sequencing data is the association of the data with the results of the analysis and the annotations generated by processing pipelines and analysts. At the moment, such association happens as a separate step; standard and effective ways of linking sequencing data with the meta information derived from it are not available.

At its 127th meeting, MPEG and ISO TC 276/WG 5 issued a joint Call for Proposals (CfP) addressing this problem. The call seeks submissions of technologies that can provide efficient representation and compression solutions for the processing of genomic annotation data.

Companies and organizations are invited to submit proposals in response to this call. Responses are expected to be submitted by January 8th, 2020 and will be evaluated during the 129th WG 11 (MPEG) meeting. Detailed information, including how to respond to the call for proposals, the requirements that have to be considered, and the test data to be used, is reported in documents N18648, N18647, and N18649, available at the 127th meeting website (http://mpeg.chiariglione.org/meetings/127). For any further questions about the call, test conditions, required software, or test sequences, please contact: Joern Ostermann, MPEG Requirements Group Chair (ostermann@tnt.uni-hannover.de), or Martin Golebiewski, Convenor of ISO TC 276/WG 5 (martin.golebiewski@h-its.org).

ISO/IEC 23005 (MPEG-V) 4th Edition

WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

At its 127th meeting, WG11 (MPEG) promoted the 4th edition of two parts of ISO/IEC 23005 (MPEG-V; Media Context and Control) to the Final Draft International Standard (FDIS) stage. The new edition of ISO/IEC 23005-1 (architecture) enables ten new use cases, which can be grouped into four categories: 3D printing, olfactory information in virtual worlds, virtual panoramic vision in cars, and adaptive sound handling. The new edition of ISO/IEC 23005-7 (conformance and reference software) is updated to reflect the changes introduced by new tools defined in other parts of ISO/IEC 23005. More information on MPEG-V and its parts 1-7 can be found at https://mpeg.chiariglione.org/standards/mpeg-v.

Finally, we certainly found the unofficial highlight of the 127th MPEG meeting while scanning the scene in Gothenburg on Tuesday night...





Monday, August 12, 2019

ACMMM'19: Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression

Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression


[PDF] (coming soon; slides to be provided later)

Jeroen van der Hooft, Tim Wauters, Filip De Turck (Ghent University - imec), Christian Timmerer, and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: The increasing popularity of head-mounted devices and 360° video cameras allows content providers to offer virtual reality video streaming over the Internet, using a relevant representation of the immersive content combined with traditional streaming techniques. While this approach allows the user to freely move her head, her location is fixed by the camera’s position within the scene. Recently, an increased interest has been shown for free movement within immersive scenes, referred to as six degrees of freedom. One way to realize this is by capturing objects through a number of cameras positioned in different angles, and creating a point cloud which consists of the location and RGB color of a significant number of points in the three-dimensional space. Although the concept of point clouds has been around for over two decades, it recently received increased attention by ISO/IEC MPEG, issuing a call for proposals for point cloud compression. As a result, dynamic point cloud objects can now be compressed to bit rates in the order of 3 to 55 Mb/s, allowing feasible delivery over today’s mobile networks. In this paper, we propose PCC-DASH, a standards-compliant means for HTTP adaptive streaming of scenes comprising multiple, dynamic point cloud objects. We present a number of rate adaptation heuristics which use information on the user’s position and focus, the available bandwidth, and the client’s buffer status to decide upon the most appropriate quality representation of each object. Through an extensive evaluation, we discuss the advantages and drawbacks of each solution. We argue that the optimal solution depends on the considered scene and camera path, which opens interesting possibilities for future work.

Keywords: HTTP adaptive streaming, MPEG-DASH, immersive video, point clouds, MPEG V-PCC, rate adaptation
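To give a flavor of such rate adaptation (a hypothetical sketch, not one of the actual PCC-DASH heuristics evaluated in the paper), the available bandwidth could be split across point cloud objects by visibility and distance before picking per-object quality representations:

```python
from dataclasses import dataclass

@dataclass
class PCObject:
    distance: float          # distance to the camera (m)
    in_focus: bool           # within the user's viewport/focus
    representations: tuple   # available bitrates (bit/s), ascending

def select_representations(objects, bandwidth):
    """Split the bandwidth budget across objects, favoring near,
    in-focus objects, then pick the highest affordable representation."""
    weights = [(1.0 if o.in_focus else 0.2) / max(o.distance, 0.1)
               for o in objects]
    total = sum(weights)
    picks = []
    for o, w in zip(objects, weights):
        budget = bandwidth * w / total
        affordable = [r for r in o.representations if r <= budget]
        picks.append(max(affordable) if affordable else o.representations[0])
    return picks

scene = [PCObject(2.0, True, (3e6, 10e6, 25e6, 55e6)),
         PCObject(8.0, False, (3e6, 10e6, 25e6, 55e6))]
print(select_representations(scene, bandwidth=40e6))
```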

Tuesday, July 30, 2019

ACM MMSys 2020 Research Track - Call for Papers

ACM MMSys 2020 Research Track - Call for Papers
June 8-11, 2020, Istanbul, Turkey

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, databases, mobile computing, distributed systems, computer vision, and middleware communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to investigate the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types.

MMSys is a venue for researchers who explore:
  • Complete multimedia systems that provide a new kind of multimedia experience or system whose overall performance improves the state-of-the-art through new research results in more than one component, or
  • Enhancements to one or more system components that provide a documented improvement over the state-of-the-art for handling continuous media or time-dependent services.
Such individual system components include:
  • Operating systems
  • Distributed architectures and protocols
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New or improved I/O architectures or I/O devices, innovative uses, and algorithms for their operation
  • Representation of continuous or time-dependent media
  • Metrics and measurement tools to assess performance
This touches aspects of many hot topics including but not limited to: content preparation and (adaptive) delivery systems, High Dynamic Range (HDR), games, virtual/augmented/mixed reality, 3D video, immersive systems, plenoptics, 360-degree video, volumetric video delivery, multimedia Internet of Things (IoT), multi and many-core, GPGPUs, mobile multimedia and 5G, wearable multimedia, peer-to-peer (P2P), cloud-based multimedia, cyber-physical systems, multi-sensory experiences, smart cities, Quality of Experience (QoE).

We encourage submissions in the following focus areas:
  • Machine learning and statistical modeling for video streaming
  • Volumetric media: from capture to consumption
  • Fake media and tools for preventing illegal broadcasts
Refer to the Web site for more info.

Important Dates
  • Submission deadline: January 10, 2020 (firm deadline)
  • Acceptance notification: March 16, 2020
  • Camera-ready deadline: April 17, 2020
  • Online submission: https://mmsys2020.hotcrp.com/
  • Submission format: 6-12 pages, using ACM style format (double-blind)
  • Reproducibility: Obtain an ACM reproducibility badge by making datasets and code available (Authors will be contacted to make their artifacts available after paper acceptance)
Submission Information
  • Papers should be between 6 and 12 pages long (in PDF format), prepared in the ACM style, and written in English. MMSys papers enable authors to present entire multimedia systems or research work that builds on considerable amounts of earlier work in a self-contained manner. MMSys papers are published in the ACM Digital Library. The papers are double-blind reviewed.
  • All submissions will be peer-reviewed by at least three TPC members. All papers will be evaluated for their scientific quality. Authors will have a chance to submit their rebuttals before online discussions among the TPC members.
ACM SIGMM has a tradition of publishing open datasets (MMSys) and open source projects (ACM Multimedia). MMSys 2020 will continue to support scientific reproducibility, by implementing the ACM reproducibility badge system. All accepted papers will be contacted by the Reproducibility Chair, inviting the authors to make their dataset and code available, and thus, obtaining an ACM badge (visible at the ACM DL). The additional material will be published as Appendixes, with no effect on the final page count for papers.

Saturday, July 6, 2019

DASH-IF sponsored ice cream social event at ACM MMSys 2019

The DASH-IF sponsored the ice cream social event at ACM MMSys 2019 which allowed for networking and discussions. Here are some impressions from this unique event...






Friday, July 5, 2019

Workshop on Coding Technologies for Immersive Audio/Visual Experiences

This is an invitation to attend the Workshop on Coding Technologies for Immersive Audio/Visual Experiences. It is not limited to MPEG members but is publicly accessible with free-of-charge preregistration. Non-MPEG members may send their registration information (name, affiliation, and email address) to Lu Yu and Silke Kenzler.

This workshop will cover the past, present, and future of the MPEG-I immersive audio and visual activities, demonstrate systems and technologies, and discuss future requirements on standardization to provide audio/visual immersive experiences.

Time/Date: 13:00-18:00, 10 July, 2019

Address: 
Room: Drottningporten 3
Clarion Post Hotel
Drottningtorget 10
411 03 Gothenburg, Sweden

Program:

  • 13:00-13:15  Introduction (Lu Yu, Zhejiang University)
  • 13:15-13:45  Use cases and challenges about user immersive experiences (Valerie Allie, InterDigital)
  • 13:45-14:15  Overview of technologies for immersive visual experiences: capture, processing, compression, standardization and display (Marek Domanski, Poznan University of Technology)
  • 14:15-14:45  MPEG-I Immersive Audio (Schuyler Quackenbush, Audio Research Labs)
  • 14:45-14:55  Brief introduction about the demos:
      Integral photography display (NHK)
      Real-time interactive demo with 3DoF+ content (InterDigital)
      Plenoptic 2.0 video camera (Tsinghua University)
      A simple free-viewpoint television system (Poznan University of Technology)
  • 14:55-15:30  Demos and coffee break
  • 15:30-16:00  360° and 3DoF+ video (Bart Kroon, Philips)
  • 16:00-16:30  Point cloud compression (Marius Preda, Telecom SudParis, CNRS Samovar)
  • 16:30-17:00  How can we achieve 6DoF video compression? (Joel Jung, Orange)
  • 17:00-17:30  How can we achieve lenslet video compression? (Xin Jin, Tsinghua University; Mehrdad Teratani, Nagoya University)
  • 17:30-18:00  Discussion

Thursday, July 4, 2019

ACMMM'19 Tutorial: A Journey towards Fully Immersive Media Access

ACM Multimedia 2019
October 21-25, 2019, Nice, France

Date/time: Monday, Oct 21, 2019, afternoon

Lecturers

Christian Timmerer, Alpen-Adria-Universität Klagenfurt & Bitmovin, Inc.
Ali C. Begen, Ozyegin University and Networked Media

Abstract

Universal media access, as proposed in the late 1990s and early 2000s, is now a reality. Thus, we can generate, distribute, share, and consume any media content, anywhere, anytime, and with/on any device. A major technical breakthrough was adaptive streaming over HTTP, resulting in the standardization of MPEG-DASH, which is now successfully deployed in HTML5 environments thanks to the corresponding media source extensions (MSE). The next big thing in adaptive media streaming is virtual reality applications and, specifically, omnidirectional (360°) media streaming, which is currently built on top of the existing adaptive streaming ecosystems. This tutorial provides a detailed overview of adaptive streaming of both traditional and omnidirectional media within HTML5 environments. The tutorial focuses on the basic principles and paradigms for adaptive streaming of both traditional and omnidirectional media, as well as on already deployed content generation, distribution, and consumption workflows. Additionally, the tutorial provides insights into standards and emerging technologies in the adaptive streaming space. Finally, the tutorial includes the latest approaches for immersive media streaming enabling 6DoF DASH through Point Cloud Compression (PCC) and concludes with open research issues and industry efforts in this domain.

Keywords: Omnidirectional media, HTTP adaptive streaming, over-the-top video, 360 video, virtual reality, immersive media access.

Learning Objectives

This tutorial consists of two main parts. In the first part, we provide a detailed overview of the HTML5 standard and show how it can be used for adaptive streaming deployments. In particular, we focus on the HTML5 video, media extensions, and multi-bitrate encoding, encapsulation and encryption workflows, and survey well-established streaming solutions. Furthermore, we present experiences from the existing deployments and the relevant de jure and de facto standards (DASH, HLS, CMAF) in this space. In the second part, we focus on omnidirectional (360) media from creation to consumption as well as first thoughts on dynamic adaptive point cloud streaming. We survey means for the acquisition, projection, coding and packaging of omnidirectional media as well as delivery, decoding and rendering methods. Emerging standards and industry practices are covered as well (OMAF, VR-IF). Both parts present some of the current research trends, open issues that need further exploration and investigation, and various efforts that are underway in the streaming industry.
Upon attending this tutorial, the participants will have an overview and understanding of the following topics:
  • Principles of HTTP adaptive streaming for the Web/HTML5
  • Principles of omnidirectional (360-degree) media delivery
  • Content generation, distribution and consumption workflows for traditional and omnidirectional media
  • Standards and emerging technologies in the adaptive streaming space
  • Current and future research on traditional and omnidirectional media delivery, specifically enabling 6DoF adaptive streaming through point cloud compression
ACM Multimedia attracts attendees who are quite knowledgeable in specific areas. However, not all are experts across multiple disciplines (such as the subject matter here) and only a few are familiar with what is happening in the field and in standards. Thus, we believe the proposed tutorial will be of interest to this year's attendees as much as it was in the past.

Table of Contents

Part I: The HTML5 Standard and Adaptive Streaming
  • HTML5 video and media extensions
  • Survey of well-established streaming solutions
  • Multi-bitrate encoding, and encapsulation and encryption workflows
  • The MPEG-DASH standard, Apple HLS and the developing CMAF standard
  • Common issues in scaling and improving quality, multi-screen/hybrid delivery
Part II: Omnidirectional (360-degree) Media
  • Acquisition, projection, coding and packaging of 360-degree video
  • Delivery, decoding and rendering methods
  • The developing MPEG-OMAF and MPEG-I standards
  • Ongoing industry efforts, specifically towards 6DoF adaptive streaming

Speakers

Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constrained environments), both from the Alpen-Adria-Universität (AAU) Klagenfurt. He joined the AAU in 1999 (as a system administrator) and is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communication, streaming, adaptation, Quality of Experience, and Sensory Experience. He was the general chair of WIAMIS 2008, QoMEX 2013, MMSys 2016, and PV 2018 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, and ICoSOLE. He also participated in ISO/MPEG work for several years, notably in the area of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH, where he also served as standard editor. In 2013, he cofounded Bitmovin (http://www.bitmovin.com/) to provide professional services around MPEG-DASH, where he holds the position of the Chief Innovation Officer (CIO) – Head of Research and Standardization. He is a senior member of IEEE and member of ACM, specifically IEEE Computer Society, IEEE Communications Society, and ACM SIGMM. Dr. Timmerer was a guest editor of three special issues for the IEEE Journal on Selected Areas in Communications (JSAC) and currently serves as associate editor for IEEE Transactions on Multimedia. Further information is available at http://blog.timmerer.com.

Ali C. Begen is the co-founder of Networked Media, a technology company that offers consulting services to industrial, legal and academic institutions in the IP video space. He has been a research and development engineer since 2001, and has broad experience in mathematical modeling, performance analysis, optimization, standards development, intellectual property and innovation. Between 2007 and 2015, he was with the Video and Content Platforms Research and Advanced Development Group at Cisco, where he designed and developed algorithms, protocols, products and solutions in the service provider and enterprise video domains. Currently, he is also affiliated with Ozyegin University, where he is teaching and advising students in the computer science department. Ali has a PhD in electrical and computer engineering from Georgia Tech. To date, he received a number of academic and industry awards, and was granted 30+ US patents. He held editorial positions in leading magazines and journals, and served in the organizing committee of several international conferences and workshops in the field. He is a senior member of both the IEEE and ACM. In 2016, he was elected distinguished lecturer by the IEEE Communications Society, and in 2018, he was re-elected for another two-year term. More details are at http://ali.begen.net.

Monday, July 1, 2019

DASH-IF Interoperability Documents for Community Review

Community Review documents are published on the DASH-IF website in order to get feedback from the industry on tools and features that are documented for improved interoperability. For each of the documents, comments may be submitted on the technologies themselves, on specific features, etc. These documents are only published temporarily for community review and will be replaced by a full version after the commenting period has closed and the comments have been addressed.

LOW-LATENCY DASH

The change request against IOP v4.3 for Community Review is accessible here. This change provides a new clause for live services that addresses specification updates as well as implementation guidelines to support low-latency DASH services and their corresponding requirements. Community review is open until July 31st, 2019. Addition to IOP is expected by Q3/2019. Comments may be submitted through GitHub or the public bug tracker.

LIVE MEDIA INGEST

The new draft specification is accessible here, and a PDF version is also available. This document specifies protocol interfaces for the live ingest/egress of media content. It can be used between live ABR encoders, streaming origins, packagers, and content delivery networks. It features support for redundant workflows with failover support and timed metadata. Community review is open until July 31st, 2019. Publication is expected by Q3/2019. Comments may be submitted through GitHub or the public bug tracker.