Sunday, August 22, 2021

Special issue on Open Media Compression: Overview, Design Criteria, and Outlook on Emerging Standards


Proceedings of the IEEE, vol. 109, no. 9, Sept. 2021

By CHRISTIAN TIMMERER, Senior Member IEEE
Guest Editor
MATHIAS WIEN, Member IEEE
Guest Editor
LU YU, Senior Member IEEE
Guest Editor
AMY REIBMAN, Fellow IEEE
Guest Editor


Abstract: Multimedia content (i.e., video, image, audio) is responsible for the majority of today’s Internet traffic, and this share is expected to grow beyond 80% in the near future. For more than 30 years, international standards have provided tools for interoperability and have served as both source and sink for challenging research activities in the domain of multimedia compression and system technologies. The goal of this special issue is to review those standards and focus on (i) the technology developed in the context of these standards and (ii) research questions addressing aspects of these standards which are left open for competition by both academia and industry.

Index Terms—Open Media Standards, MPEG, JPEG, JVET, AOM, Computational Complexity

C. Timmerer, M. Wien, L. Yu and A. Reibman, "Special issue on Open Media Compression: Overview, Design Criteria, and Outlook on Emerging Standards," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1423-1434, Sept. 2021, doi: 10.1109/JPROC.2021.3098048.


A Technical Overview of AV1

J. Han et al., "A Technical Overview of AV1," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1435-1462, Sept. 2021, doi: 10.1109/JPROC.2021.3058584.

Abstract: The AV1 video compression format is developed by the Alliance for Open Media consortium. It achieves more than a 30% reduction in bit rate compared to its predecessor VP9 for the same decoded video quality. This article provides a technical overview of the AV1 codec design that enables the compression performance gains with considerations for hardware feasibility.

Developments in International Video Coding Standardization After AVC, With an Overview of Versatile Video Coding (VVC)

B. Bross, J. Chen, J. -R. Ohm, G. J. Sullivan and Y. -K. Wang, "Developments in International Video Coding Standardization After AVC, With an Overview of Versatile Video Coding (VVC)," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1463-1493, Sept. 2021, doi: 10.1109/JPROC.2020.3043399.

Abstract: In the last 17 years, since the finalization of the first version of the now-dominant H.264/Moving Picture Experts Group-4 (MPEG-4) Advanced Video Coding (AVC) standard in 2003, two major new generations of video coding standards have been developed. These include the standards known as High Efficiency Video Coding (HEVC) and Versatile Video Coding (VVC). HEVC was finalized in 2013, repeating the ten-year cycle time set by its predecessor and providing about 50% bit-rate reduction over AVC. The cycle was shortened by three years for the VVC project, which was finalized in July 2020, yet again achieving about a 50% bit-rate reduction over its predecessor (HEVC). This article summarizes these developments in video coding standardization after AVC. It especially focuses on providing an overview of the first version of VVC, including comparisons against HEVC. Besides further advances in hybrid video compression, as in previous development cycles, the broad versatility of the application domain that is highlighted in the title of VVC is explained. Included in VVC is the support for a wide range of applications beyond the typical standard- and high-definition camera-captured content codings, including features to support computer-generated/screen content, high dynamic range content, multilayer and multiview coding, and support for immersive media such as 360° video.

Advances in Video Compression System Using Deep Neural Network: A Review and Case Studies

D. Ding, Z. Ma, D. Chen, Q. Chen, Z. Liu and F. Zhu, "Advances in Video Compression System Using Deep Neural Network: A Review and Case Studies," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1494-1520, Sept. 2021, doi: 10.1109/JPROC.2021.3059994.

Abstract: Significant advances in video compression systems have been made in the past several decades to satisfy the near-exponential growth of Internet-scale video traffic. From the application perspective, we have identified three major functional blocks, including preprocessing, coding, and postprocessing, which have been continuously investigated to maximize the end-user quality of experience (QoE) under a limited bit rate budget. Recently, artificial intelligence (AI)-powered techniques have shown great potential to further increase the efficiency of the aforementioned functional blocks, both individually and jointly. In this article, we review recent technical advances in video compression systems extensively, with an emphasis on deep neural network (DNN)-based approaches, and then present three comprehensive case studies. On preprocessing, we show a switchable texture-based video coding example that leverages DNN-based scene understanding to extract semantic areas for the improvement of a subsequent video coder. On coding, we present an end-to-end neural video coding framework that takes advantage of the stacked DNNs to efficiently and compactly code input raw videos via fully data-driven learning. On postprocessing, we demonstrate two neural adaptive filters to, respectively, facilitate the in-loop and postfiltering for the enhancement of compressed frames. Finally, a companion website hosting the contents developed in this work can be accessed publicly at https://purdueviper.github.io/dnn-coding/.

MPEG Immersive Video Coding Standard

J. M. Boyce et al., "MPEG Immersive Video Coding Standard," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1521-1536, Sept. 2021, doi: 10.1109/JPROC.2021.3062590.

Abstract: This article introduces the ISO/IEC MPEG Immersive Video (MIV) standard, MPEG-I Part 12, which is undergoing standardization. The draft MIV standard provides support for viewing immersive volumetric content captured by multiple cameras with six degrees of freedom (6DoF) within a viewing space that is determined by the camera arrangement in the capture rig. The bitstream format and decoding processes of the draft specification along with aspects of the Test Model for Immersive Video (TMIV) reference software encoder, decoder, and renderer are described. The use cases, test conditions, quality assessment methods, and experimental results are provided. In the TMIV, multiple texture and geometry views are coded as atlases of patches using a legacy 2-D video codec, while optimizing for bitrate, pixel rate, and quality. The design of the bitstream format and decoder is based on the visual volumetric video-based coding (V3C) and video-based point cloud compression (V-PCC) standard, MPEG-I Part 5.

Compression of Sparse and Dense Dynamic Point Clouds—Methods and Standards

C. Cao, M. Preda, V. Zakharchenko, E. S. Jang and T. Zaharia, "Compression of Sparse and Dense Dynamic Point Clouds—Methods and Standards," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1537-1558, Sept. 2021, doi: 10.1109/JPROC.2021.3085957.

Abstract: In this article, a survey of the point cloud compression (PCC) methods by organizing them with respect to the data structure, coding representation space, and prediction strategies is presented. Two paramount families of approaches reported in the literature—the projection- and octree-based methods—are proven to be efficient for encoding dense and sparse point clouds, respectively. These approaches are the pillars on which the Moving Picture Experts Group Committee developed two PCC standards published as final international standards in 2020 and early 2021, respectively, under the names: video-based PCC and geometry-based PCC. After surveying the current approaches for PCC, the technologies underlying the two standards are described in detail from an encoder perspective, providing guidance for potential standard implementors. In addition, experimental evaluations in terms of compression performance for both solutions are provided.

JPEG XS—A New Standard for Visually Lossless Low-Latency Lightweight Image Coding

A. Descampe et al., "JPEG XS—A New Standard for Visually Lossless Low-Latency Lightweight Image Coding," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1559-1577, Sept. 2021, doi: 10.1109/JPROC.2021.3080916.

Abstract: Joint Photographic Experts Group (JPEG) XS is a new International Standard from the JPEG Committee (formally known as ISO/International Electrotechnical Commission (IEC) JTC1/SC29/WG1). It defines an interoperable, visually lossless low-latency lightweight image coding that can be used for mezzanine compression within any AV market. Among the targeted use cases, one can cite video transport over professional video links (serial digital interface (SDI), internet protocol (IP), and Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression (for example, in cameras and the automotive industry). The core coding system is composed of an optional color transform, a wavelet transform, and a novel entropy encoder, processing groups of coefficients by coding their magnitude level and packing the magnitude refinement. Such a design allows for visually transparent quality at moderate compression ratios, scalable end-to-end latency that ranges from less than one line to a maximum of 32 lines of the image, and a low-complexity real-time implementation in application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), central processing unit (CPU), and graphics processing unit (GPU). This article details the key features of this new standard and the profiles and formats that have been defined so far for the various applications. It also gives a technical description of the core coding system. Finally, the latest performance evaluation results of recent implementations of the standard are presented, followed by the current status of the ongoing standardization process and future milestones.

MPEG Standards for Compressed Representation of Immersive Audio

S. R. Quackenbush and J. Herre, "MPEG Standards for Compressed Representation of Immersive Audio," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1578-1589, Sept. 2021, doi: 10.1109/JPROC.2021.3075390.

Abstract: The term “immersive audio” is frequently used to describe an audio experience that provides the listener the sensation of being fully immersed or “present” in a sound scene. This can be achieved via different presentation modes, such as surround sound (several loudspeakers horizontally arranged around the listener), 3D audio (with loudspeakers at, above, and below listener ear level), and binaural audio to headphones. This article provides an overview of two recent standards that support the bitrate-efficient carriage of high-quality immersive sound. The first is MPEG-H 3D audio, which is a versatile standard that supports multiple immersive sound signal formats (channels, objects, and higher order ambisonics) and is now being adopted in broadcast and streaming applications. The second is MPEG-I immersive audio, an extension of 3D audio, currently under development, which is targeted for virtual and augmented reality applications. This will support rendering of fully user-interactive immersive sound for three degrees of user movement [three degrees of freedom (3DoF)], i.e., yaw, pitch, and roll head movement, and for six degrees of user movement [six degrees of freedom (6DoF)], i.e., 3DoF plus translational x, y, and z user position movements.

An Overview of Omnidirectional MediA Format (OMAF)

M. M. Hannuksela and Y. -K. Wang, "An Overview of Omnidirectional MediA Format (OMAF)," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1590-1606, Sept. 2021, doi: 10.1109/JPROC.2021.3063544.

Abstract: During recent years, there have been product launches and research for enabling immersive audio–visual media experiences. For example, a variety of head-mounted displays and 360° cameras are available in the market. To facilitate interoperability between devices and media system components by different vendors, the Moving Picture Experts Group (MPEG) developed the Omnidirectional MediA Format (OMAF), which is arguably the first virtual reality (VR) system standard. OMAF is a storage and streaming format for omnidirectional media, including 360° video and images, spatial audio, and associated timed text. This article provides a comprehensive overview of OMAF.

An Introduction to MPEG-G: The First Open ISO/IEC Standard for the Compression and Exchange of Genomic Sequencing Data

J. Voges, M. Hernaez, M. Mattavelli and J. Ostermann, "An Introduction to MPEG-G: The First Open ISO/IEC Standard for the Compression and Exchange of Genomic Sequencing Data," in Proceedings of the IEEE, vol. 109, no. 9, pp. 1607-1622, Sept. 2021, doi: 10.1109/JPROC.2021.3082027.

Abstract: The development and progress of high-throughput sequencing technologies have transformed the sequencing of DNA from a scientific research challenge to practice. With the release of the latest generation of sequencing machines, the cost of sequencing a whole human genome has dropped to less than $600. Such achievements open the door to personalized medicine, where it is expected that genomic information of patients will be analyzed as a standard practice. However, the associated costs, related to storing, transmitting, and processing the large volumes of data, are already comparable to the costs of sequencing. To support the design of new and interoperable solutions for the representation, compression, and management of genomic sequencing data, the Moving Picture Experts Group (MPEG), jointly with working group 5 of ISO/TC276 “Biotechnology,” has started to produce the ISO/IEC 23092 series, known as MPEG-G. MPEG-G not only offers higher levels of compression compared with the state of the art but also provides new functionalities, such as built-in support for random access in the compressed domain, support for data protection mechanisms, flexible storage, and streaming capabilities. MPEG-G specifies only the decoding syntax of compressed bitstreams, as well as a file format and a transport format. This allows for the development of new encoding solutions with higher degrees of optimization while maintaining compatibility with any existing MPEG-G decoder.

Saturday, August 21, 2021

MPEG AG 5 Workshop on Quality of Immersive Media: Assessment and Metrics

The Quality of Experience (QoE) is well-defined in QUALINET white papers [here, here], but its assessment and metrics are subject to research. The aim of this workshop on “Quality of Immersive Media: Assessment and Metrics” is to provide a forum for researchers and practitioners to discuss the latest findings in this field. The scope of this workshop is (i) to raise awareness about MPEG efforts in the context of quality of immersive visual media and (ii) to invite experts (outside of MPEG) to present new techniques relevant to this workshop.

Quality assessments in the context of the MPEG standardization process typically serve two purposes: (1) to foster decision-making on the tool adoptions during the standardization process and (2) to validate the outcome of a standardization effort compared to an established anchor (i.e., for verification testing).

We kindly invite you to the first online MPEG AG 5 Workshop on Quality of Immersive Media: Assessment and Metrics as follows.

Logistics (online):

  • Date: October 5, 2021
  • Time slot: 1500-1700 UTC
  • Zoom (video recording available below)

Program/Speakers:

15:00-15:10: Joel Jung & Christian Timmerer (AhG co-chairs): Welcome notice

15:10-15:30: Mathias Wien (AG 5 convenor): MPEG Visual Quality Assessment: Tasks and Perspectives
Abstract: The Advisory Group on MPEG Visual Quality Assessment (ISO/IEC JTC1 SC29/AG5) was founded in 2020 with the goal of selecting and designing subjective quality evaluation methodologies and objective quality metrics for the assessment of visual coding technologies in the context of the MPEG standardization work. In this talk, the current work items as well as the perspectives and first achievements of the group are presented.

15:30-15:50: Aljosa Smolic: Perception and Quality of Immersive Media
Abstract: Interest in immersive media increased significantly over recent years. Besides applications in entertainment, culture, health, industry, etc., telepresence and remote collaboration gained importance due to the pandemic and climate crisis. Immersive media have the potential to increase social integration and to reduce greenhouse gas emissions. As a result, technologies along the whole pipeline from capture to display are maturing and applications are becoming available, creating business opportunities. One aspect of immersive technologies that is still relatively undeveloped is the understanding of perception and quality, including subjective and objective assessment. The interactive nature of immersive media poses new challenges to estimation of saliency or visual attention, and to the development of quality metrics. The V-SENSE lab of Trinity College Dublin addresses these questions in current research. This talk will highlight corresponding examples in 360 VR video, light fields, volumetric video and XR.

15:50-16:00: Break/Discussions

16:00-16:20: Jesús Gutiérrez: Quality assessment of immersive media: Recent activities within VQEG
Abstract: This presentation will provide an overview of the recent activities on quality assessment of immersive media carried out within the Video Quality Experts Group (VQEG), particularly within the Immersive Media Group (IMG). Among other efforts, outcomes will be presented from the cross-lab test (carried out by ten different labs) to assess and validate subjective evaluation methodologies for 360° videos, which was instrumental in the development of the ITU-T Recommendation P.919. Also, insights will be provided on the current plans for exploring the evaluation of the quality of experience of immersive communication systems, considering different technologies such as 360° video, point cloud, free-viewpoint video, etc.

16:20-16:40: Alexander Raake: Perceptual evaluation of Immersive Media - from video quality towards a holistic QoE perspective
Abstract: Immersive visual media spans from higher-resolution video with increased field of view to fully interactive extended reality (XR) systems based on VR, AR, or MR technology. Here, quality and Quality of Experience (QoE) evaluation are key to ensure valuable experiences for the users and thus successful technology developments. The talk presents some work in ITU-T SG 12 on the assessment of immersive media, and corresponding contributions and other related research activities by the Audiovisual Technology (AVT) group at TU Ilmenau. In the first part of the talk, the quality model series P.1203 and P.1204 for resolutions of up to 4K/UHD1 will be presented, with a primary focus on the bitstream-based models P.1203.1 and P.1204.3. Besides their application to 2D video, their usage for gaming-video and 360° video quality assessment are addressed. In the second part, the talk discusses aspects of QoE for immersive media that go beyond visual quality. Research is presented on the exploration behavior of users for 360° video, showing the influence due to the content as well as the task given to the subjects. Furthermore, some recent work on presence and cybersickness evaluation for 360° video is discussed. The talk concludes with an outlook on using indirect methods and cognitive performances as evaluation criteria for audiovisual IVEs.

16:40-17:00: Laura Toni: Understanding user interactivity for immersive communications and its impact on QoE 
Abstract: A major challenge for the next decade is to design virtual and augmented reality systems for real-world use cases such as healthcare, entertainment, e-education, and high-risk missions. This requires immersive systems that operate at scale, in a personalized manner, remaining bandwidth-tolerant whilst meeting quality and latency criteria. This can be accomplished only by a fundamental revolution of the network and immersive systems that puts the interactive user at the heart of the system rather than at the end of the chain. With this goal in mind, in this talk, we provide an overview of our current research on the behaviour of interactive users in immersive experiences and its impact on next-generation multimedia systems. We present novel tools for behavioural analysis of users navigating in 3-DoF and 6-DoF systems, and we show the impact and advantages of taking user behaviour into account in immersive systems. We then conclude with a perspective on the impact of user behaviour studies on QoE.

17:00: Conclusions and Discussions

Saturday, August 14, 2021

PSTR: Per-title encoding using Spatio-Temporal Resolutions


IEEE International Conference on Multimedia and Expo (ICME)

5-9 July 2021, Shenzhen, China

Hadi Amirpour, Christian Timmerer, and Mohammad Ghanbari

Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Current per-title encoding schemes encode the same video content (or snippets/subsets thereof) at various bitrates and spatial resolutions to find an optimal bitrate ladder for each video content. Compared to traditional approaches, in which a predefined, content-agnostic (“fit-to-all”) encoding ladder is applied to all video contents, per-title encoding can result in (i) a significant decrease of storage and delivery costs and (ii) an increase in the Quality of Experience. In current per-title encoding schemes, the bitrate ladder is optimized using only spatial resolutions, while we argue that with the emergence of high-framerate videos, this principle can be extended to temporal resolutions as well. In this paper, we improve per-title encoding for each content using spatio-temporal resolutions. Experimental results show that our proposed approach doubles the bitrate savings by considering both temporal and spatial resolutions compared to considering only spatial resolutions.

Keywords: Bitrate ladder, per-title encoding, framerate, spatial resolution.
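The ladder construction described in the abstract can be sketched as a Pareto-front selection over candidate encodes at different (spatial resolution, framerate) pairs. The sketch below is illustrative only: the function name, bitrates, and quality scores are invented and not taken from the paper.

```python
# Hypothetical per-title ladder sketch over spatial AND temporal resolutions.
# Each candidate encode is (bitrate, quality) tagged with its (res, fps);
# the ladder keeps only Pareto-optimal points.

def pareto_front(encodes):
    """Keep encodes not dominated in (bitrate, quality): an encode is dropped
    if another one has lower-or-equal bitrate and higher-or-equal quality."""
    front = []
    for e in sorted(encodes, key=lambda e: (e["bitrate"], -e["quality"])):
        if not front or e["quality"] > front[-1]["quality"]:
            front.append(e)
    return front

candidates = [
    {"res": "1080p", "fps": 60, "bitrate": 4300, "quality": 94.0},
    {"res": "1080p", "fps": 30, "bitrate": 3000, "quality": 90.5},
    {"res": "720p",  "fps": 60, "bitrate": 2100, "quality": 88.0},
    {"res": "720p",  "fps": 30, "bitrate": 1600, "quality": 84.0},
    {"res": "720p",  "fps": 30, "bitrate": 1900, "quality": 83.0},  # dominated
]

ladder = pareto_front(candidates)  # 4 entries; the dominated encode is gone
```

In a real per-title pipeline the quality column would come from a metric such as VMAF, and the surviving points would form the content-specific bitrate ladder.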

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.


Thursday, August 12, 2021

MMM’21: Towards Optimal Multirate Encoding for HTTP Adaptive Streaming


The International MultiMedia Modeling Conference (MMM)
June 22-24, 2021, Prague, Czech Republic

Hadi Amirpour, Ekrem Çetinkaya, Christian Timmerer, and Mohammad Ghanbari
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: HTTP Adaptive Streaming (HAS) enables high-quality streaming of video contents. In HAS, videos are divided into short intervals called segments, and each segment is encoded at various qualities/bitrates to adapt to the available bandwidth. Multiple encodings of the same content impose high costs on video content providers. To reduce the time-complexity of encoding multiple representations, state-of-the-art methods typically encode the highest quality representation first and reuse the information gathered during its encoding to accelerate the encoding of the remaining representations. As encoding the highest quality representation requires the highest time-complexity compared to the lower quality representations, it becomes a bottleneck in parallel encoding scenarios, and the overall time-complexity is limited by that of the highest quality representation. In this paper, to address this problem, we consider every representation from the highest to the lowest quality as a potential, single reference to accelerate the encoding of the other, dependent representations. We formulate a set of encoding modes and assess their performance in terms of BD-Rate and time-complexity, using both VMAF and PSNR as objective metrics. Experimental results show that encoding a middle quality representation as a reference can significantly reduce the maximum encoding complexity and hence is an efficient way of encoding multiple representations in parallel. Based on this fact, a fast multirate encoding method is proposed which utilizes the depth and prediction modes of a middle quality representation to accelerate the encoding of the dependent representations.

Keywords: HEVC, Video Encoding, Multirate Encoding, DASH
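The parallel-encoding bottleneck argument above can be made concrete with a toy makespan calculation. All numbers and names below are invented for illustration, and the assumed 2x speedup for dependent representations is a hypothetical constant, not a measured value:

```python
# Toy illustration of why the choice of reference representation bounds
# the parallel encoding time (makespan).
# times[i]: standalone encoding time of representation i, sorted from the
# highest quality (slowest) to the lowest quality (fastest).

def makespan(times, ref, speedup=2.0):
    """Parallel makespan when representation `ref` is encoded first and all
    others are then encoded concurrently, reusing its information."""
    dependents = [t / speedup for i, t in enumerate(times) if i != ref]
    return times[ref] + max(dependents)

times = [100.0, 60.0, 35.0, 20.0]

high_ref = makespan(times, ref=0)  # highest quality as the single reference
mid_ref = makespan(times, ref=1)   # a middle representation as the reference
```

With these toy numbers, the middle-quality reference finishes in 110 time units versus 130 for the highest-quality reference, mirroring the paper's observation that a middle representation reduces the maximum encoding complexity in parallel scenarios.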

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Tuesday, August 10, 2021

DCC’21: SLFC: Scalable Light Field Coding


 Data Compression Conference (DCC)
23-26 March 2021, Snowbird, Utah, USA

Hadi Amirpour, Christian Timmerer, and Mohammad Ghanbari
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Light field imaging enables some post-processing capabilities like refocusing, changing view perspective, and depth estimation. As light field images are represented by multiple views they contain a huge amount of data that makes compression inevitable. Although there are some proposals to efficiently compress light field images, their main focus is on encoding efficiency. However, some important functionalities such as viewpoint and quality scalabilities, random access, and uniform quality distribution have not been addressed adequately. In this paper, an efficient light field image compression method based on a deep neural network is proposed, which classifies multiple views into various layers. In each layer, the target view is synthesized from the available views of previously encoded/decoded layers using a deep neural network. This synthesized view is then used as a virtual reference for the target view inter-coding. In this way, random access to an arbitrary view is provided. Moreover, uniform quality distribution among multiple views is addressed. In higher bitrates where random access to an arbitrary view is more crucial, the required bitrate to access the requested view is minimized.

Keywords: Light field, Compression, Scalable, Random Access.
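The layered design described in the abstract can be illustrated by assigning the views of a camera grid to coding layers; the grid size, layer rules, and function names below are hypothetical and only convey the idea of predicting later layers from earlier ones:

```python
# Hypothetical layer assignment for light field views (illustrative only).
# Layer 0 holds independently coded anchor views (the corners); each later
# layer can be synthesized/predicted from all previously decoded layers.

def assign_layers(rows, cols):
    layers = {}
    for r in range(rows):
        for c in range(cols):
            corner = r in (0, rows - 1) and c in (0, cols - 1)
            border = r in (0, rows - 1) or c in (0, cols - 1)
            center = r == rows // 2 and c == cols // 2
            if corner:
                layers[(r, c)] = 0  # anchors, coded without references
            elif border or center:
                layers[(r, c)] = 1  # predicted from layer-0 views
            else:
                layers[(r, c)] = 2  # interior views, predicted from layers 0-1
    return layers

def access_cost(layers, view):
    """Random access: decode every view in layers at or below the target's."""
    return sum(1 for _, l in layers.items() if l <= layers[view])

layers = assign_layers(9, 9)
```

Accessing a view then requires decoding only the layers at or below its own, so anchor (layer-0) views remain reachable at minimal cost, which is the random-access property the abstract highlights.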

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.


Tuesday, August 3, 2021

MPEG news: a report from the 135th meeting (virtual)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

MPEG News Archive

The 135th MPEG meeting was once again held as an online meeting, and the official press release can be found here and comprises the following items:

  • MPEG Video Coding promotes MPEG Immersive Video (MIV) to the FDIS stage
  • Verification tests for more application cases of Versatile Video Coding (VVC)
  • MPEG Systems reaches first milestone for Video Decoding Interface for Immersive Media
  • MPEG Systems further enhances the extensibility and flexibility of Network-based Media Processing
  • MPEG Systems completes support of Versatile Video Coding and Essential Video Coding in High Efficiency Image File Format
  • Two MPEG White Papers:
    • Versatile Video Coding (VVC)
    • MPEG-G and its application of regulation and privacy

In this column, I’d like to focus on MIV and VVC including systems-related aspects as well as a brief update about DASH (as usual).

MPEG Immersive Video (MIV)

At the 135th MPEG meeting, MPEG Video Coding has promoted the MPEG Immersive Video (MIV) standard to the Final Draft International Standard (FDIS) stage. MIV was developed to support compression of immersive video content in which multiple real or virtual cameras capture a real or virtual 3D scene. The standard enables storage and distribution of immersive video content over existing and future networks for playback with 6 Degrees of Freedom (6DoF) of view position and orientation.

From a technical point of view, MIV is a flexible standard for multiview video with depth (MVD) that leverages the strong hardware support for commonly used video codecs to code volumetric video. Each view may use one of three projection formats: (i) equirectangular, (ii) perspective, or (iii) orthographic. By packing and pruning views, MIV can achieve bit rates around 25 Mb/s and a pixel rate equivalent to HEVC Level 5.2.
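That pixel-rate claim can be sanity-checked against the HEVC Level 5.2 limit. The maximum luma sample rate of 1,069,547,520 samples/s is taken from the HEVC specification's level table; the helper function and atlas sizes below are invented for illustration:

```python
# Sanity check (illustrative): HEVC Level 5.2 caps the luma sample rate at
# 1,069,547,520 samples/s, so the packed MIV atlases must jointly fit within
# that budget when decoded by a single Level 5.2 decoder.

HEVC_L52_MAX_LUMA_SR = 1_069_547_520  # max luma samples per second

def fits_level_5_2(atlases, fps):
    """atlases: list of (width, height) of the packed texture/geometry atlases."""
    rate = sum(w * h for w, h in atlases) * fps
    return rate <= HEVC_L52_MAX_LUMA_SR

# Two hypothetical 4096x4096 atlases at 30 fps stay under the limit...
assert fits_level_5_2([(4096, 4096), (4096, 4096)], 30)
# ...but the same atlases at 60 fps would exceed it.
assert not fits_level_5_2([(4096, 4096), (4096, 4096)], 60)
```

This kind of budget check is why the pruning and packing stages matter: they trade pixel rate against the number of transmitted views.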

The MIV standard is designed as a set of extensions and profile restrictions for the Visual Volumetric Video-based Coding (V3C) standard (ISO/IEC 23090-5). The main body of this standard is shared between MIV and the Video-based Point Cloud Coding (V-PCC) standard (ISO/IEC 23090-5 Annex H). It may potentially be used by other MPEG-I volumetric codecs under development. The carriage of MIV is specified through the Carriage of V3C Data standard (ISO/IEC 23090-10).

The test model and objective metrics are publicly available at https://gitlab.com/mpeg-i-visual.

At the same time, MPEG Systems has begun developing the Video Decoding Interface for Immersive Media (VDI) standard (ISO/IEC 23090-13), which specifies the input and output interfaces of video decoders to provide more flexible use of decoder resources for such applications. At the 135th MPEG meeting, MPEG Systems reached the first formal milestone of developing the ISO/IEC 23090-13 standard by promoting the text to Committee Draft ballot status. The VDI standard allows for dynamic adaptation of video bitstreams to provide the decoded output pictures in such a way that the number of actual video decoders can be smaller than the number of elementary video streams to be decoded. In other cases, virtual instances of video decoders can be associated with the portions of elementary streams required to be decoded. With this standard, the resource requirements of a platform running multiple virtual video decoder instances can be further optimized by considering the specific decoded video regions that are actually presented to the users rather than considering only the number of video elementary streams in use.

Research aspects: It seems that visual compression and systems standards enabling immersive media applications and services are becoming mature. However, the Quality of Experience (QoE) of such applications and services is still in its infancy. The QUALINET White Paper on Definitions of Immersive Media Experience (IMEx) provides a survey of definitions of immersion and presence which leads to a definition of Immersive Media Experience (IMEx). Consequently, the next step is working towards QoE metrics in this domain that requires subjective quality assessments imposing various challenges during the current COVID-19 pandemic.

Versatile Video Coding (VVC) updates

The third round of verification testing for Versatile Video Coding (VVC) has been completed. This includes the testing of High Dynamic Range (HDR) content of 4K ultra-high-definition (UHD) resolution using the Hybrid Log-Gamma (HLG) and Perceptual Quantization (PQ) video formats. The test was conducted using state-of-the-art high-quality consumer displays, emulating an internet streaming-type scenario.

On average, VVC showed approximately 50% bit rate reduction compared to High Efficiency Video Coding (HEVC).

Additionally, the ISO/IEC 23008-12 Image File Format has been amended to support images coded using Versatile Video Coding (VVC) and Essential Video Coding (EVC).

Research aspects: The results of the verification tests are usually publicly available and can be used as a baseline for future improvements of the respective standards, including the evaluation thereof. For example, the tradeoff between compression efficiency and encoding runtime (time complexity) for live and video-on-demand scenarios is always an interesting research aspect.

The latest MPEG-DASH Update

Finally, I’d like to provide a brief update on MPEG-DASH! At the 135th MPEG meeting, MPEG Systems issued a draft amendment to the core MPEG-DASH specification (i.e., ISO/IEC 23009-1) that further improves Preroll, which has been renamed to Preperiod; it will be discussed further during the Ad-hoc Group (AhG) period (please join the dash email list for further details/announcements). Additionally, this amendment includes some minor improvements for nonlinear playback. The so-called Technologies under Consideration (TuC) document comprises new proposals that have not yet reached consensus for promotion to any official standards documents (e.g., amendments to existing DASH standards or new parts). Currently, proposals for minimizing initial delay, among others, are under discussion. Finally, libdash has been updated to support the MPEG-DASH schema according to the 5th edition.

An updated overview of DASH standards/features can be found in the Figure below.

MPEG-DASH status of July 2021.

Research aspects: The informative aspects of MPEG-DASH, such as adaptive bitrate (ABR) algorithms, have been subject to research for many years. New editions of the standard mostly introduced incremental improvements, but disruptive ideas rarely reached the surface. Perhaps it's time to take a step back and re-think how streaming should work for today's and future media applications and services.

The 136th MPEG meeting will again be an online meeting in October 2021, but MPEG is aiming to meet in person again in January 2022 (if possible). Click here for more information about MPEG meetings and their developments.