Sunday, November 22, 2020

MPEG news: a report from the 132nd meeting (virtual)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will be also posted at ACM SIGMM Records.

The 132nd MPEG meeting was the first meeting with the new structure. That is, ISO/IEC JTC 1/SC 29/WG 11 -- the official name of MPEG under the ISO structure -- was disbanded after the 131st MPEG meeting and some of the subgroups of WG 11 (MPEG) have been elevated to independent MPEG Working Groups (WGs) and Advisory Groups (AGs) of SC 29 rather than subgroups of the former WG 11. Thus, the MPEG community is now an affiliated group of WGs and AGs that will continue meeting together according to previous MPEG meeting practices and will further advance the standardization activities of the MPEG work program.

The new structure is as follows (incl. Convenors and the position within the former WG 11 structure):

  • AG 2 MPEG Technical Coordination (Convenor: Prof. Jörn Ostermann; for overall MPEG work coordination; previously known as the MPEG chairs meeting; it is expected that one can also provide input to this AG without being a member)
  • WG 2 MPEG Technical Requirements (Convenor: Dr. Igor Curcio; former Requirements subgroup)
  • WG 3 MPEG Systems (Convenor: Dr. Youngkwon Lim; former Systems subgroup)
  • WG 4 MPEG Video Coding (Convenor: Prof. Lu Yu; former Video subgroup)
  • WG 5 MPEG Joint Video Coding Team(s) with ITU-T SG 16 (Convenor: Prof. Jens-Rainer Ohm; former JVET)
  • WG 6 MPEG Audio Coding (Convenor: Dr. Schuyler Quackenbush; former Audio subgroup)
  • WG 7 MPEG Coding of 3D Graphics (Convenor: Prof. Marius Preda; former 3DG subgroup)
  • WG 8 MPEG Genome Coding (Convenor: Prof. Marco Mattavelli; newly established WG)
  • AG 3 MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim; former Communications subgroup)
  • AG 5 MPEG Visual Quality Assessment (Convenor: Prof. Mathias Wien; former Test subgroup).

The 132nd MPEG meeting was held as an online meeting and more than 300 participants continued to work efficiently on standards for the future needs of the industry. As a group, MPEG started to explore new application areas that will benefit from standardized compression technology in the future. A new web site has been created and can be found at http://mpeg.org/.

The official press release can be found here and comprises the following items:

  • Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance and Reference Software Standards Reach their First Milestone
  • MPEG Completes Geometry-based Point Cloud Compression (G-PCC) Standard
  • MPEG Evaluates Extensions and Improvements to MPEG-G and Announces a Call for Evidence on New Advanced Genomics Features and Technologies
  • MPEG Issues Draft Call for Proposals on the Coded Representation of Haptics
  • MPEG Evaluates Responses to MPEG IPR Smart Contracts CfP
  • MPEG Completes Standard on Harmonization of DASH and CMAF
  • MPEG Completes 2nd Edition of the Omnidirectional Media Format (OMAF)
  • MPEG Completes the Low Complexity Enhancement Video Coding (LCEVC) Standard

In this report, I’d like to focus on VVC, G-PCC, DASH/CMAF, OMAF, and LCEVC.

Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance & Reference Software Standards Reach their First Milestone

MPEG completed a verification testing assessment of the recently ratified Versatile Video Coding (VVC) standard for ultra-high definition (UHD) content with standard dynamic range, as may be used in newer streaming and broadcast television applications. The verification test was performed using rigorous subjective quality assessment methods and showed that VVC provides a compelling gain over its predecessor -- the High Efficiency Video Coding (HEVC) standard produced in 2013. In particular, the verification test was performed using the VVC reference software implementation (VTM) and the recently released open-source encoder implementation of VVC (VVenC):
  • Using its reference software implementation (VTM), VVC showed bit rate savings of roughly 45% over HEVC for comparable subjective video quality.
  • Using the open-source encoder VVenC, additional bit rate savings of more than 10% relative to VTM were observed, while at the same time VVenC runs significantly faster than the reference software implementation.
Additionally, the standardization work for both conformance testing and reference software for the VVC standard reached its first major milestone, i.e., progressing to the Committee Draft ballot in the ISO/IEC approval process. The conformance testing standard (ISO/IEC 23090-15) will ensure interoperability among the diverse applications that use the VVC standard, and the reference software standard (ISO/IEC 23090-16) will provide an illustration of the capabilities of VVC and a valuable example showing how the standard can be implemented. The reference software will further facilitate the adoption of the standard by being available for use as the basis of product implementations.
Research aspects: as for every new video codec, compression efficiency and computational complexity are important performance metrics. While the reference software (VTM) provides a valid reference in terms of compression efficiency, it is not optimized for runtime. VVenC already seems to provide a significant improvement, and with x266 another open-source implementation will be available soon. Together with AOMedia's AV1 (including its possible successor AV2), we are looking forward to a lively future in the area of video codecs.
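
For readers who want to reproduce such comparisons: bit rate savings like the ones above are typically reported as Bjøntegaard delta-rate (BD-rate) values computed over rate-distortion curves. Below is a minimal sketch of that computation; the rate/PSNR sample points are made up for illustration only.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-rate: average bitrate difference (%) of the test
    codec vs. the anchor over their overlapping quality range."""
    # Fit third-order polynomials of log-rate as a function of PSNR.
    fit_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    fit_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # Integrate both fits over the common PSNR interval and average.
    int_a = np.polyval(np.polyint(fit_a), hi) - np.polyval(np.polyint(fit_a), lo)
    int_t = np.polyval(np.polyint(fit_t), hi) - np.polyval(np.polyint(fit_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100  # negative values = bitrate savings

# Illustrative (made-up) rate/PSNR points, four operating points per codec:
hevc_rate, hevc_psnr = [2500, 4800, 9000, 16000], [34.2, 36.1, 38.0, 39.8]
vvc_rate, vvc_psnr = [1300, 2500, 4700, 8400], [34.4, 36.3, 38.2, 40.0]
print(f"BD-rate VVC vs. HEVC: {bd_rate(hevc_rate, hevc_psnr, vvc_rate, vvc_psnr):.1f}%")
```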

MPEG Completes Geometry-based Point Cloud Compression Standard

MPEG promoted its ISO/IEC 23090-9 Geometry-based Point Cloud Compression (G-PCC) standard to the Final Draft International Standard (FDIS) stage. G-PCC addresses lossless and lossy coding of time-varying 3D point clouds with associated attributes such as color and material properties. This technology is particularly suitable for sparse point clouds. ISO/IEC 23090-5 Video-based Point Cloud Compression (V-PCC), which reached the FDIS stage in July 2020, addresses the same problem but for dense point clouds, by projecting the (typically dense) 3D point clouds onto planes, and then processing the resulting sequences of 2D images using video compression techniques. The generalized approach of G-PCC, where the 3D geometry is directly coded to exploit any redundancy in the point cloud itself, is complementary to V-PCC and particularly useful for sparse point clouds representing large environments.

Point clouds are typically represented by extremely large amounts of data, which is a significant barrier to mass-market applications. However, the relative ease of capturing and rendering spatial information compared to other volumetric video representations makes point clouds increasingly popular for displaying immersive volumetric data. The current draft reference software implementation of a lossless, intra-frame G‐PCC encoder provides a compression ratio of up to 10:1 and lossy coding of acceptable quality for a variety of applications with a ratio of up to 35:1.
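
To put such ratios into perspective, here is a quick back-of-the-envelope calculation; the per-point memory layout (float32 coordinates plus 8-bit RGB) is an illustrative assumption, not something mandated by G-PCC.

```python
# Back-of-the-envelope size of a single point cloud frame. The per-point
# layout (float32 xyz + 8-bit RGB) is an illustrative assumption only.
points = 1_000_000
bytes_per_point = 3 * 4 + 3              # 12 bytes geometry + 3 bytes color
raw_mb = points * bytes_per_point / 1e6  # 15.0 MB uncompressed
print(f"raw frame:        {raw_mb:.1f} MB")
print(f"lossless (10:1):  {raw_mb / 10:.2f} MB")
print(f"lossy    (35:1):  {raw_mb / 35:.2f} MB")
```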

By providing high immersion at currently available bit rates, the G‐PCC standard will enable various applications such as 3D mapping, indoor navigation, autonomous driving, advanced augmented reality (AR) with environmental mapping, and cultural heritage.
Research aspects: the main research focus related to G-PCC and V-PCC is currently on compression efficiency, but one should not dismiss delivery aspects, including dynamic adaptive streaming. A recent paper on this topic has been published in the IEEE Communications Magazine and is entitled "From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom".

MPEG Finalizes the Harmonization of DASH and CMAF

MPEG successfully completed the harmonization of Dynamic Adaptive Streaming over HTTP (DASH) with the Common Media Application Format (CMAF), featuring a DASH profile for use with CMAF (as part of the 1st Amendment of ISO/IEC 23009-1:2019 4th edition).

CMAF and DASH segments are both based on the ISO Base Media File Format (ISOBMFF), which per se enables smooth integration of both technologies. Most importantly, this DASH profile defines (a) a normative mapping of CMAF structures to DASH structures and (b) how to use Media Presentation Description (MPD) as a manifest format.

Additional tools added in this amendment include
  • timing and processing models for DASH events and timed metadata tracks, including in-band event streams,
  • a method for specifying resynchronization points of segments when the segments have internal structures that allow container-level resynchronization,
  • an MPD patch framework that allows the transmission of partial MPD information, as opposed to the complete MPD, using the XML patch framework defined in IETF RFC 5261 (a simplified sketch follows below), and
  • content protection enhancements for efficient signaling.
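
To illustrate the MPD patch idea, the toy example below appends a new Period to an MPD using an RFC 5261-style <add> operation. It is a heavily simplified sketch (no namespaces, no attribute operations, no <remove>/<replace> handling), not the normative processing model.

```python
import xml.etree.ElementTree as ET

# Toy MPD and a patch that appends a new Period, in the spirit of the
# RFC 5261-based MPD patch framework. Heavily simplified: no namespaces,
# no attribute operations, and no <remove>/<replace> handling.
mpd = ET.fromstring('<MPD><Period id="p0"><AdaptationSet/></Period></MPD>')
patch = ET.fromstring(
    '<Patch><add sel="/MPD"><Period id="p1"><AdaptationSet/></Period></add></Patch>')

for op in patch:
    if op.tag == 'add' and op.get('sel') == '/MPD':
        for new_node in op:          # append each new child to the MPD root
            mpd.append(new_node)

print(ET.tostring(mpd).decode())     # now contains Periods p0 and p1
```
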
It is expected that the 5th edition of the MPEG DASH standard (ISO/IEC 23009-1) containing this change will be issued at the 133rd MPEG meeting in January 2021. An overview of DASH standards/features can be found in the Figure below.
Research aspects: one of the features enabled by CMAF is low-latency streaming, which is actively researched within the multimedia systems community (e.g., here). The main research focus has been on the ABR logic, while its impact on the network is not yet fully understood and requires strong collaboration among stakeholders along the delivery path, including ingest, encoding, packaging, (encryption), content delivery network (CDN), and consumption. A holistic view on ABR is needed to enable innovation and the next step towards the future generation of streaming technologies (https://athena.itec.aau.at/).
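
As a concrete starting point for the ABR logic mentioned above, here is a minimal throughput-based rate selection sketch; the bitrate ladder, the safety margin, and the harmonic-mean estimator are common but illustrative choices, not part of any standard.

```python
def select_bitrate(ladder_kbps, throughput_samples_kbps, safety=0.8):
    """Pick the highest rendition that fits under a conservative throughput
    estimate -- the simplest form of throughput-based ABR logic."""
    # The harmonic mean is robust against short throughput spikes.
    estimate = len(throughput_samples_kbps) / sum(1 / t for t in throughput_samples_kbps)
    budget = safety * estimate
    eligible = [r for r in sorted(ladder_kbps) if r <= budget]
    return eligible[-1] if eligible else min(ladder_kbps)

ladder = [300, 750, 1500, 3000, 6000]               # kbps renditions
print(select_bitrate(ladder, [3200, 2800, 4100]))   # -> 1500
```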

MPEG Completes 2nd Edition of the Omnidirectional Media Format

MPEG completed the standardization of the 2nd edition of the Omnidirectional MediA Format (OMAF) by promoting ISO/IEC 23090-2 to Final Draft International Standard (FDIS) status, including the following features:
  • “Late binding” technologies that deliver and present only the part of the content that corresponds to the dynamically changing user viewport. To enable an efficient implementation of this feature, this edition of the specification introduces the concept of bitstream rewriting, in which a compliant bitstream covering only the user's viewport is dynamically generated on the client by combining the received portions of the bitstream.
  • Extension of OMAF beyond 360-degree video. This edition introduces the concept of viewpoints, which can be considered as user-switchable camera positions for viewing content or as temporally contiguous parts of a storyline, providing multiple choices for the storyline a user can follow.
  • Enhanced use of video, image, or timed text overlays on top of omnidirectional visual background video or images related to a sphere or a viewport.
Research aspects: standards usually define formats to enable interoperability, but various informative aspects are left open for industry competition and are subject to research and development. The same holds for OMAF: its 2nd edition enables researchers and developers to work towards efficient viewport-adaptive implementations focusing on the users' viewport.
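
A core building block of such viewport-adaptive implementations is deciding which spatial parts of the content a given viewport touches. The sketch below does this for a tiled equirectangular layout by sampling the viewport center and extremes; the tile grid and field-of-view values are illustrative assumptions, and a real client would project the exact viewport frustum.

```python
def visible_tiles(yaw_deg, pitch_deg, fov_h=90, fov_v=90, cols=8, rows=4):
    """Rough tile selection for tiled equirectangular 360-degree video:
    return (col, row) indices of tiles the viewport may touch by sampling
    its center and extremes."""
    tiles = set()
    tile_w, tile_h = 360 / cols, 180 / rows
    for d_yaw in (-fov_h / 2, 0, fov_h / 2):
        for d_pitch in (-fov_v / 2, 0, fov_v / 2):
            lon = (yaw_deg + d_yaw) % 360                     # wrap horizontally
            lat = max(-89.9, min(89.9, pitch_deg + d_pitch))  # clamp at the poles
            tiles.add((int(lon // tile_w), int((lat + 90) // tile_h)))
    return sorted(tiles)

print(visible_tiles(yaw_deg=10, pitch_deg=0))  # tiles around the front of the sphere
```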

MPEG Completes the Low Complexity Enhancement Video Coding Standard

MPEG is pleased to announce the completion of the new ISO/IEC 23094-2 standard, i.e., Low Complexity Enhancement Video Coding (MPEG-5 Part 2 LCEVC), which has been promoted to Final Draft International Standard (FDIS) at the 132nd MPEG meeting.
  • LCEVC adds an enhancement data stream on top of existing and future video codecs that can appreciably improve the resolution and visual quality of reconstructed video with effective compression efficiency at limited complexity (see the conceptual sketch after this list).
  • LCEVC can be used to complement devices originally designed only for decoding the base layer bitstream, by using firmware, operating system, or browser support. It is designed to be compatible with existing video workflows (e.g., CDNs, metadata management, DRM/CA) and network protocols (e.g., HLS, DASH, CMAF) to facilitate the rapid deployment of enhanced video services.
  • LCEVC can be used to deliver higher video quality in limited bandwidth scenarios, especially when the available bit rate is low for high-resolution video delivery and decoding complexity is a challenge. Typical use cases include mobile streaming and social media, and services that benefit from high-density/low-power transcoding.
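
The few lines below illustrate the basic layered idea behind LCEVC (a base codec carries a low-resolution picture; the enhancement stream carries residuals that are added after upscaling). This is a conceptual sketch only and does not reflect LCEVC's normative tools (e.g., its transforms and entropy coding).

```python
import numpy as np

# Conceptual layered reconstruction in the spirit of LCEVC (not its normative
# tools): a base codec carries a low-resolution picture, and the enhancement
# stream carries residuals that are added after upscaling.
full = np.random.rand(8, 8).astype(np.float32)               # stand-in source picture
base = full[::2, ::2]                                        # low-resolution base layer
upscaled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)  # simple 2x upscale
residual = full - upscaled                                   # what the enhancement carries
reconstructed = upscaled + residual                          # decoder-side combination
assert np.allclose(reconstructed, full)                      # lossless in this toy setup
```
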
Research aspects: LCEVC provides a kind of scalable video coding by combining hardware- and software-based decoders, which allows for a certain flexibility as part of regular software life cycle updates. However, LCEVC has never been compared to Scalable Video Coding (SVC) and Scalable High Efficiency Video Coding (SHVC), which could be an interesting aspect for future work.

The 133rd MPEG meeting will be again an online meeting in January 2021.

Click here for more information about MPEG meetings and their developments.

Monday, October 19, 2020

ACM MMSys 2021 Research Track - Call for Papers

“Bridging Deep Media and Communications”
May 25-28, 2021, Istanbul, Turkey

Scope and Topics of Interest
The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, databases, mobile computing, distributed systems, computer vision, and middleware communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to investigate the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types.

Such individual system components include:
  • Operating systems
  • Distributed architectures and protocols
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New I/O architectures/devices, innovative uses, and algorithms
  • Representation of continuous or time-dependent media
  • Metrics and measurement tools to assess performance
This touches on many hot topics, including content preparation and (adaptive) delivery systems, HDR, games, virtual/augmented/mixed reality, 3D video, immersive systems, plenoptics, 360-degree/volumetric video delivery, multimedia Internet of Things (IoT), multi- and many-core, GPGPUs, mobile multimedia and 5G, wearable multimedia, cloud-based multimedia, P2P, cyber-physical systems, multi-sensory experiences, smart cities, and QoE.

We encourage submissions in the following Focus Areas:
  • Machine learning and statistical modeling for streaming
  • Volumetric media and collaborative immersive environments
  • Fake media and tools for preventing illegal broadcasts
Important Dates
  • Submission deadline: December 14, 2020 (firm deadline)
  • Acceptance notification: February 19, 2021
  • Camera-ready deadline: April 2, 2021
Submission Instructions

Online submission: https://mmsys2021.hotcrp.com/

Papers must be up to 12 pages, plus optional pages for references only, in PDF format, prepared in the ACM style and written in English. Papers must be anonymised and must not reveal any information about the authors.

MMSys papers enable authors to present entire multimedia systems or research work that builds on considerable amounts of earlier work in a self-contained manner. All submissions will be peer-reviewed by at least 3 TPC members and will be evaluated for their scientific quality. Authors will have a chance to submit their rebuttals before online discussions among the TPC members. MMSys'21 will also continue to support scientific reproducibility by implementing the ACM reproducibility badge system. Accepted papers will be published in the ACM Digital Library.

General Chairs
  • Özgü Alay (Simula Metropolitan and University of Oslo, Norway)
  • Cheng-Hsin Hsu (National Tsing Hua University, Taiwan)
  • Ali C. Begen (Ozyegin University and Networked Media, Turkey)
TPC Chairs
  • Lucile Sassatelli (Universite Cote d'Azur, France)
  • Feng Qian (University of Minnesota, USA)
Supporters

** Several travel grants will be offered **

Adobe, Ozyegin University, Turkish Airlines, Twitch, YouTube, Comcast, Medianova, MulticoreWare, AMD, Argela, Bigdata Teknoloji, Bitmovin, DASH-IF, Mux, Nokia, Pixery, SSIMWAVE, Streaming Video Alliance, Tencent, Unified Streaming, Ericsson, Interdigital, Sky

Follow Us
This call for papers in PDF: https://2021.acmmmsys.org/files/cfp_mmsys21.pdf

Wednesday, October 14, 2020

Happy World Standards Day 2020 - Protecting the planet with standards

Today, October 14, we celebrate World Standards Day, "the day honors the efforts of the thousands of experts who develop voluntary standards within standards development organizations" (SDOs). Many SDOs, such as W3C, IETF, ITU, and ISO (incl. JPEG and MPEG), celebrate this day with individual statements highlighting the importance of standards and interoperability in today's information and communication technology landscape. Interestingly, this year's ISO topic for World Standards Day is protecting the planet with standards (also here). I have also blogged about World Standards Day in 2017 and 2019.

In this blog post, I'd like to highlight what MPEG can do to protect the planet (with standards). In general, each new generation of video codec improves coding efficiency significantly (by approx. 50%), but at the cost of increased complexity, which impacts compute/memory requirements. An overview of the video codecs can be found in the figure below, and I would like to specifically point to MPEG-2 (H.262 | 13818-2), AVC, HEVC, and VVC.

History of international video coding standardization [full slide deck here].

The performance history of the standard generations can be seen in the figure below, which roughly indicates a 50% bitrate reduction per generation at a given constant quality.
The performance history of standard generations [full slide deck here].
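
To make the compounding effect of that rule of thumb concrete, the tiny calculation below halves the bitrate once per generation; the 16 Mbit/s starting point for MPEG-2 is an illustrative figure only.

```python
# Compounding the rough "50% per generation" rule of thumb at constant
# quality; the 16 Mbit/s starting point for MPEG-2 is illustrative only.
rate_mbps = 16.0
for codec in ("MPEG-2", "AVC", "HEVC", "VVC"):
    print(f"{codec:7s} ~{rate_mbps:4.1f} Mbit/s")
    rate_mbps /= 2
```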

Furthermore, MPEG specified ISO/IEC 23001-11:2019, also referred to as "Energy-efficient media consumption (green metadata)", which specifies metadata for energy-efficient decoding, encoding, presentation, and selection of media. The actual specification can be purchased here and an overview can also be found here.

While it's true that streaming video accounts for the majority of today's internet traffic -- a share that has even increased during the current COVID-19 pandemic -- it's also true that "moving bits is easier than moving physical objects/bodies". That said, we are committed to further optimizing resource allocation for all stages of video streaming, from provisioning to consumption (e.g., as part of the ATHENA and APOLLO projects). In this context, we are organizing a special session at PCS'21 entitled "Video encoding for large scale HAS deployments", where we argue that optimizing video encoding for large-scale HAS deployments is the next step in order to improve the Quality of Experience (QoE) while optimizing costs.

Since July 2020, MPEG has been operating under a new structure, and while I am writing this blog post, the 132nd MPEG meeting is taking place online, discussing new standards according to its roadmap (see figure below). An overview/archive of my MPEG reports can be found here, and the report for the 132nd MPEG meeting will appear there shortly after the meeting.

MPEG Roadmap as of July 2020.


Monday, October 5, 2020

One-Year-ATHENA: Happy Birthday

One year ago, in October 2019, we started the Christian Doppler Laboratory ATHENA at the University of Klagenfurt, which is funded by the Christian Doppler Research Association and Bitmovin. The aim of ATHENA is to research and develop novel paradigms, approaches, (prototype) tools, and evaluation results for the phases

  • multimedia content provisioning,
  • content delivery, and
  • content consumption in the media delivery chain as well as for
  • end-to-end aspects, with a focus on, but not being limited to, HTTP Adaptive Streaming (HAS).
https://athena.itec.aau.at/

Prior to starting ATHENA, we were confronted with hiring a team from scratch (i.e., two post-docs and six PhD students), which kept us very busy, but eventually we found the best team for ATHENA. Within only a few months, we managed to have the first ATHENA publication accepted at an international conference and we prepared for the first conference travels. Unfortunately, during this period COVID-19 reached us and we (all) experienced a major disruption of our daily lives. I am very proud of how my entire team dealt with this unpleasant situation.

In this first year of ATHENA, we got 12* papers accepted for publication at international conferences/journals, which makes one publication per month on average; a truly remarkable result for a one-year-old project (out of seven years in total). Finally, I would like to thank our (inter)national collaborators for joining one or the other effort within ATHENA.

The list of ATHENA publications after this first year is as follows:
  1. "From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom", IEEE Communications Magazine.
  2. "Fast Multirate Encoding for HTTP Adaptive Streaming Using Machine Learning", IEEE International Conference on Visual Communications and Image Processing (VCIP) 2020.
  3. "Relevance-Based Compression of Cataract Surgery Videos Using Convolutional Neural Networks", ACM International Conference on Multimedia 2020.
  4. "QUALINET White Paper on Definitions of Immersive Media Experience (IMEx)", European Network on Quality of Experience in Multimedia Systems and Services.
  5. "Scalable High Efficiency Video Coding based HTTP Adaptive Streaming over QUIC Using Retransmission", ACM SIGCOMM 2020 Workshop on Evolution, Performance, and Interoperability of QUIC (EPIQ 2020).
  6. "Towards View-Aware Adaptive Streaming of Holographic Content", 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW).
  7. "H2BR: An HTTP/2-based Retransmission Technique to Improve the QoE of Adaptive Video Streaming", Proceedings of the 25th ACM Workshop on Packet Video (PV ’20).
  8. "CAdViSE: Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players", Proceedings of the 11th ACM Multimedia Systems Conference (MMSys ’20).
  9. "Objective and Subjective QoE Evaluation for Adaptive Point Cloud Streaming", 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX).
  10. "Performance Analysis of ACTE: A Bandwidth Prediction Method for Low-Latency Chunked Streaming", ACM Transactions on Multimedia Computing, Communications, and Applications.
  11. "On Optimizing Resource Utilization in AVC-based Real-time Video Streaming", 2020 IEEE Conference on Network Softwarization.
  12. "Multi-Period Per-Scene Optimization for HTTP Adaptive Streaming", 2020 IEEE International Conference on Multimedia and Expo (ICME).
  13. "Fast Multi-Rate Encoding for Adaptive HTTP Streaming", 2020 Data Compression Conference (DCC).
* ... the QUALINET white paper has not been published at a conference or in a journal, which is why it is excluded from the count. Nevertheless, it is an important piece of work by various QUALINET contributors working in the area of immersive media experiences.

Saturday, August 1, 2020

MPEG news: a report from the 131st meeting (virtual)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will be also posted at ACM SIGMM Records.


The 131st MPEG meeting, once again held online, concluded on July 3, 2020 with a press release comprising an impressive list of news items, led by


MPEG Announces VVC – the Versatile Video Coding Standard


Just in the middle of the SC29 (i.e., MPEG’s parent body within ISO) restructuring process, MPEG successfully ratified -- jointly with ITU-T’s VCEG within JVET -- its next-generation video codec among other interesting results from the 131st MPEG meeting:


Standards progressing to final approval ballot (FDIS)

  • MPEG Announces VVC – the Versatile Video Coding Standard
  • Point Cloud Compression – MPEG promotes a Video-based Point Cloud Compression Technology to the FDIS stage
  • MPEG-H 3D Audio – MPEG promotes Baseline Profile for 3D Audio to the final stage

Call for Proposals

  • Call for Proposals on Technologies for MPEG-21 Contracts to Smart Contracts Conversion
  • MPEG issues a Call for Proposals on extension and improvements to ISO/IEC 23092 standard series

Standards progressing to the first milestone of the ISO standard development process

  • Widening support for storage and delivery of MPEG-5 EVC
  • Multi-Image Application Format adds support of HDR
  • Carriage of Geometry-based Point Cloud Data progresses to Committee Draft
  • MPEG Immersive Video (MIV) progresses to Committee Draft
  • Neural Network Compression for Multimedia Applications – MPEG progresses to Committee Draft
  • MPEG issues Committee Draft of Conformance and Reference Software for Essential Video Coding (EVC)

The corresponding press release of the 131st MPEG meeting can be found here: https://mpeg-standards.com/meetings/mpeg-131/. This report focuses on video coding featuring VVC as well as PCC and systems aspects (i.e., file format, DASH).


MPEG Announces VVC – the Versatile Video Coding Standard


MPEG is pleased to announce the completion of the new Versatile Video Coding (VVC) standard at its 131st meeting. The document has been progressed to its final approval ballot as ISO/IEC 23090-3 and will also be known as H.266 in the ITU-T.


VVC Architecture (from IEEE ICME 2020 tutorial of Mathias Wien and Benjamin Bross)


VVC is the latest in a series of very successful standards for video coding that have been jointly developed with ITU-T, and it is the direct successor to the well-known and widely used High Efficiency Video Coding (HEVC) and Advanced Video Coding (AVC) standards (see the architecture in the figure above). VVC provides a major benefit in compression over HEVC. Plans are underway to conduct a verification test with formal subjective testing to confirm that VVC achieves an estimated 50% bit rate reduction versus HEVC for equal subjective video quality. Test results have already demonstrated that VVC typically provides about a 40% bit rate reduction for 4K/UHD video sequences in tests using objective metrics (i.e., PSNR, VMAF, MS-SSIM; a minimal PSNR sketch is given after the following list). Application areas especially targeted for the use of VVC include

  • ultra-high definition 4K and 8K video,
  • video with a high dynamic range and wide colour gamut, and
  • video for immersive media applications such as 360° omnidirectional video.
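
As referenced above, here is a minimal sketch of PSNR, the simplest of the objective metrics mentioned; the synthetic frames merely stand in for real reference/reconstructed pictures.

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """PSNR in dB between reference and reconstructed frames -- the simplest
    of the objective metrics (PSNR, VMAF, MS-SSIM) cited above."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Synthetic stand-ins for a UHD luma plane and a lightly distorted version.
ref = np.random.randint(0, 256, (2160, 3840), dtype=np.uint8)
rec = np.clip(ref.astype(np.int16) + np.random.randint(-2, 3, ref.shape), 0, 255).astype(np.uint8)
print(f"{psnr(ref, rec):.2f} dB")
```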

Furthermore, VVC is designed for a wide variety of types of video such as camera captured, computer-generated, and mixed content for screen sharing, adaptive streaming, game streaming, video with scrolling text, etc. Conventional standard-definition and high-definition video content are also supported with similar gains in compression. In addition to improving coding efficiency, VVC also provides highly flexible syntax supporting such use cases as (i) subpicture bitstream extraction, (ii) bitstream merging, (iii) temporal sublayering, and (iv) layered coding scalability.


The current performance of VVC compared to HEVC-HM is shown in the figure below, which confirms the statement above but also highlights the increased complexity. Please note that VTM9 is not optimized for speed but for functionality (i.e., compression efficiency).


Performance of VVC, VTM9 vs. HM (taken from https://bit.ly/mpeg131).


MPEG also announces completion of ISO/IEC 23002-7 “Versatile supplemental enhancement information for coded video bitstreams” (VSEI), developed jointly with ITU-T as Rec. ITU-T H.274. The new VSEI standard specifies the syntax and semantics of video usability information (VUI) parameters and supplemental enhancement information (SEI) messages for use with coded video bitstreams. VSEI is especially intended for use with VVC, although it is drafted to be generic and flexible so that it may also be used with other types of coded video bitstreams. Once specified in VSEI, different video coding standards and systems-environment specifications can re-use the same SEI messages without the need for defining special-purpose data customized to the specific usage context.


At the same time, the Media Coding Industry Forum (MC-IF) announced a VVC patent pool fostering process with an initial meeting on September 1, 2020. The aim of this meeting is to identify tasks and to propose a schedule for VVC pool fostering, with the goal of selecting a pool facilitator/administrator by the end of 2020. MC-IF itself is not facilitating or administering a patent pool.


At the time of writing this blog post, it is probably too early to make an assessment of whether VVC will share the fate of HEVC or AVC (w.r.t. patent pooling). AVC is still the most widely used video codec but with AVC, HEVC, EVC, VVC, LCEVC, AV1, (AV2), and probably also AVS3 -- did I miss anything? -- the competition and pressure are certainly increasing.


Research aspects: from a research perspective, reduction of time-complexity (for a variety of use cases) while maintaining quality and bitrate at acceptable levels is probably the most relevant aspect. Improvements in individual building blocks of VVC by using artificial neural networks (ANNs) are another area of interest but also end-to-end aspects of video coding using ANNs will probably pave the roads towards the/a next generation of video codec(s). Utilizing VVC and its features for HTTP adaptive streaming (HAS) is probably most interesting for me but maybe also for others...

MPEG promotes a Video-based Point Cloud Compression Technology to the FDIS stage

At its 131st meeting, MPEG promoted its Video-based Point Cloud Compression (V-PCC) standard to the Final Draft International Standard (FDIS) stage. V-PCC addresses lossless and lossy coding of 3D point clouds with associated attributes such as colors and reflectance. Point clouds are typically represented by extremely large amounts of data, which is a significant barrier for mass-market applications. However, the relative ease of capturing and rendering spatial information as point clouds compared to other volumetric video representations makes point clouds increasingly popular for presenting immersive volumetric data. With the current V-PCC encoder implementation providing compression in the range of 100:1 to 300:1, a dynamic point cloud of one million points can be encoded at 8 Mbit/s with good perceptual quality. Real-time decoding and rendering of V-PCC bitstreams have also been demonstrated on current mobile hardware.
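
A quick sanity check of those numbers is given below; the per-point layout (~10-bit coordinates plus 8-bit RGB) and the 30 fps frame rate are illustrative assumptions, not taken from the standard.

```python
# Sanity-checking the press-release numbers: one million points at 8 Mbit/s.
# The per-point layout (~10-bit xyz + 8-bit RGB) and 30 fps are illustrative
# assumptions, not taken from the standard.
points, fps = 1_000_000, 30
bits_per_point = 3 * 10 + 3 * 8                 # 54 bits geometry + color
raw_mbps = points * bits_per_point * fps / 1e6  # ~1620 Mbit/s uncompressed
print(f"raw:        ~{raw_mbps:.0f} Mbit/s")
print(f"compressed: 8 Mbit/s -> ratio ~{raw_mbps / 8:.0f}:1")  # ~200:1, within 100:1-300:1
```
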
The V-PCC standard leverages video compression technologies and the video ecosystem in general (hardware acceleration, transmission services, and infrastructure) while enabling new kinds of applications. The V-PCC standard contains several profiles that leverage existing AVC and HEVC implementations, which may make them suitable to run on existing and emerging platforms. The standard is also extensible to upcoming video specifications such as Versatile Video Coding (VVC) and Essential Video Coding (EVC).

The V-PCC standard is based on Visual Volumetric Video-based Coding (V3C), which is expected to be re-used by other MPEG-I volumetric codecs under development. MPEG is also developing a standard for the carriage of V-PCC and V3C data (ISO/IEC 23090-10) which has been promoted to DIS status at the 130th MPEG meeting.

By providing high-level immersiveness at currently available bandwidths, the V-PCC standard is expected to enable several types of applications and services such as six Degrees of Freedom (6 DoF) immersive media, virtual reality (VR) / augmented reality (AR), immersive real-time communication and cultural heritage.

Research aspects: as V-PCC is video-based, similar research aspects apply as for video codecs, such as improving efficiency both for encoding and rendering as well as reducing time complexity. During the development of V-PCC, mainly HEVC (and AVC) were used, but it would definitely be interesting to also use VVC for PCC. Finally, the dynamic adaptive streaming of V-PCC data is still in its infancy, despite some articles published here and there.

MPEG Systems related News

Finally, I’d like to share news related to MPEG systems and the carriage of video data as depicted in the figure below. In particular, the carriage of VVC (and also EVC) has now been enabled in MPEG-2 Systems (specifically within the transport stream) and in the various file formats (specifically within the NAL file format). The latter is also used in CMAF and DASH, which makes VVC (and also EVC) ready for HTTP adaptive streaming (HAS).

Carriage of Video in MPEG Systems Standards (taken from https://bit.ly/mpeg131).

What about DASH and CMAF?

CMAF maintains a so-called "technologies under consideration" document which contains -- among other things -- a proposed VVC CMAF profile. Additionally, there are two exploration activities related to CMAF, i.e., (i) multi-stream support and (ii) storage, archiving, and content management for CMAF files.

DASH works on potential improvements for the first amendment to ISO/IEC 23009-1 4th edition related to CMAF support, the event processing model, and other extensions. Additionally, there's a working draft for a second amendment to ISO/IEC 23009-1 4th edition enabling a bandwidth change signaling track and other enhancements. Furthermore, ISO/IEC 23009-8 (Session-based DASH operations) has been advanced to Draft International Standard (see also my last report).

An overview of the current status of MPEG-DASH can be found in the figure below.


The next meeting will be again an online meeting in October 2020.

Finally, MPEG organized a Webinar presenting results from the 131st MPEG meeting. The slides and video recordings are available here: https://bit.ly/mpeg131.

Click here for more information about MPEG meetings and their developments.

Thursday, July 16, 2020

MPEG131 Press Release (Index): WG11 (MPEG) Announces VVC – the Versatile Video Coding Standard

WG11 (MPEG) Announces VVC – the Versatile Video Coding Standard


The 131st WG 11 (MPEG) meeting was held online, 29 June – 3 July 2020

Table of Contents

Standards progressing to final approval ballot (FDIS)
Call for Proposals
Standards progressing to the first milestone of the ISO standard development process
Webinar: What’s new in MPEG?

MPEG cordially invites you to its first webinar: What's new in MPEG? A brief update about the results of its 131st MPEG meeting featuring:
  • Welcome and Introduction: Jörn Ostermann, Acting Convenor of WG11 (MPEG)
  • Versatile Video Coding (VVC): Jens-Rainer Ohm and Gary Sullivan, JVET Chairs
  • MPEG 3D Audio: Schuyler Quackenbush, MPEG Audio Chair
  • Video-based Point Cloud Compression (V-PCC): Marius Preda, MPEG 3DG Chair
  • MPEG Immersive Video (MIV): Bart Kroon, MPEG Video BoG Chair
  • Carriage of Versatile Video Coding (VVC) and Enhanced Video Coding (EVC): Young-Kwon Lim, MPEG Systems Chair
  • MPEG Roadmap: Jörn Ostermann, Acting Convenor of WG11 (MPEG)
When: Tuesday, July 21, 2020, 10:00 UTC and 21:00 UTC (to accommodate different time zones)
How: Please register here https://bit.ly/mpeg131. Q&A via sli.do (https://app.sli.do/event/xpzpkhlm; event # 54597) starting from July 21, 2020.

How to contact WG 11 (MPEG) and Further Information

Journalists that wish to receive WG 11 (MPEG) Press Releases by email should contact Dr. Christian Timmerer at christian.timmerer@itec.uni-klu.ac.at or christian.timmerer@bitmovin.com or subscribe via https://lists.aau.at/mailman/listinfo/mpeg-pr. For timely updates follow us on Twitter (https://twitter.com/mpeggroup).

Future WG 11 (MPEG) meetings are planned as follows: 
  • No. 132, Online, 12 – 16 October 2020
  • No. 133, Cape Town, ZA, 11 – 15 January 2021
  • No. 134, Geneva, CH, 26 – 30 April 2021
  • No. 135, Prague, CZ, 12 – 16 July 2021
For further information about WG 11 (MPEG), please contact:

Prof. Dr.-Ing. Jörn Ostermann (Convenor of WG 11 (MPEG), Germany)
Leibniz Universität Hannover
Appelstr. 9A
30167 Hannover, Germany
Tel: +49 511 762 5316
Fax: +49 511 762 5333

or

Priv.-Doz. Dr. Christian Timmerer
Alpen-Adria-Universität Klagenfurt | Bitmovin Inc.
9020 Klagenfurt am Wörthersee, Austria, Europe
Tel: +43 463 2700 3621

MPEG131 Press Release: WG11 (MPEG) issues Committee Draft of Conformance and Reference Software for Essential Video Coding (EVC)


WG11 (MPEG) issues Committee Draft of Conformance and Reference Software for Essential Video Coding (EVC)

At its 131st meeting, WG11 (MPEG) promoted the specification of the Conformance and Reference Software for Essential Video Coding (ISO/IEC 23094-4) to the Committee Draft (CD) level. The Essential Video Coding (EVC) standard (ISO/IEC 23094-1) provides improved compression capability over existing video coding standards, together with the timely publication of licensing terms. The issued specification includes conformance bitstreams as well as reference software for generating those conformance bitstreams. This important standard will greatly help the industry achieve effective interoperability between products using EVC and will provide valuable information to ease the development of such products. The final specification is expected to be available in early 2021.