Friday, May 7, 2021

MMSys’20: Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players (CAdViSE)

CAdViSE: Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players

ACM Multimedia Systems Conference 2020 (MMSys 2020)
https://2020.acmmmsys.org/
Babak Taraghi (Alpen-Adria-Universität Klagenfurt), Anatoliy Zabrovskiy (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Attempting to cope with fluctuations of network conditions in terms of available bandwidth, latency, and packet loss, and to deliver the highest quality of video (and audio) content to users, research on adaptive video streaming has attracted intense efforts from the research community and huge investments from technology giants. How successful these efforts and investments are is a question that requires precise measurement of the results of those technological advancements. HTTP-based Adaptive Streaming (HAS) algorithms, which seek to improve video streaming over the Internet, introduce video bitrate adaptivity in a way that is scalable and efficient. However, the wide spectrum of variables and configuration options that each HAS implementation takes into account makes measuring the results and visualizing statistics on performance and quality of experience a highly complex task. In this paper, we introduce CAdViSE, our Cloud-based Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The paper demonstrates a test environment that can be instantiated in a cloud infrastructure, examines multiple media players under network attributes that change at defined points of the experiment time, and concludes the evaluation with visualized statistics and insights into the results.
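To give a flavor of what such automated testing involves, the sketch below emulates fluctuating network conditions on a Linux host using tc/netem, changing bandwidth, delay, and loss at defined points in time while a media player streams. This is only a minimal illustration of the general technique, not CAdViSE's actual implementation; the interface name and the profile values are hypothetical.

```python
import subprocess
import time

# Hypothetical network trace: (duration_s, bandwidth, delay, loss) per phase.
PROFILE = [(30, "5mbit", "50ms", "0%"), (30, "1mbit", "150ms", "1%")]
IFACE = "eth0"  # assumed egress interface of the streaming server

def apply_shaping(bandwidth, delay, loss):
    """Replace the current qdisc with a netem emulation of the given phase."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=False)
    subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
                    "rate", bandwidth, "delay", delay, "loss", loss],
                   check=True)

for duration, bw, delay, loss in PROFILE:
    apply_shaping(bw, delay, loss)
    time.sleep(duration)  # let the players stream under these conditions
```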

Keywords: HTTP Adaptive Streaming, Media Players, MPEG-DASH, Network Emulation, Automated Testing, Quality of Experience

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

ACM Reference Format:
Babak Taraghi, Anatoliy Zabrovskiy, Christian Timmerer, and Hermann Hellwagner. 2020. CAdViSE: Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players. In 11th ACM Multimedia Systems Conference (MMSys’20), June 8–11, 2020, Istanbul, Turkey. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3339825.3393581

Thursday, May 6, 2021

ACM TOMM: Performance Analysis of ACTE: a Bandwidth Prediction Method for Low-Latency Chunked Streaming

Performance Analysis of ACTE: a Bandwidth Prediction Method for Low-Latency Chunked Streaming

ACM Transactions on Multimedia Computing, Communications, and Applications

[PDF]

Abdelhak Bentaleb (National University of Singapore), Christian Timmerer (Alpen-Adria-Universität Klagenfurt), Ali C. Begen (Ozyegin University, Networked Media), Roger Zimmermann (National University of Singapore)

Abstract: HTTP adaptive streaming with chunked transfer encoding can offer low-latency streaming without sacrificing coding efficiency. This allows media segments to be delivered while still being packaged. However, conventional schemes often make wildly inaccurate bandwidth measurements due to the presence of idle periods between the chunks, which causes sub-optimal adaptation decisions. To address this issue, we earlier proposed ACTE (ABR for Chunked Transfer Encoding), a bandwidth prediction scheme for low-latency chunked streaming. While ACTE was a significant step forward, in this study we focus on two remaining open areas, namely (i) quantifying the impact of encoding parameters, including chunk and segment durations, bitrate levels, minimum interval between IDR-frames, and frame rate on ACTE, and (ii) exploring the impact of video content complexity on ACTE. We thoroughly investigate these questions and report on our findings. We also discuss some additional issues that arise in the context of pursuing very low-latency HTTP video streaming.
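The bandwidth prediction at the core of ACTE is based on recursive least squares (RLS) over per-chunk bandwidth measurements. The following is a minimal sketch of a sliding-window RLS predictor, not the authors' code; the window size, forgetting factor, and initialization constant are assumed values.

```python
import numpy as np

class RLSPredictor:
    """Linear predictor over the last n bandwidth samples, updated with
    recursive least squares using a forgetting factor lam."""
    def __init__(self, n=5, lam=0.98, delta=100.0):
        self.n = n
        self.lam = lam
        self.w = np.zeros(n)        # filter coefficients
        self.P = np.eye(n) / delta  # inverse correlation matrix
        self.history = []

    def update(self, measured_bw):
        if len(self.history) >= self.n:
            x = np.array(self.history[-self.n:])
            e = measured_bw - self.w @ x                  # a-priori error
            k = self.P @ x / (self.lam + x @ self.P @ x)  # gain vector
            self.w += k * e
            self.P = (self.P - np.outer(k, x @ self.P)) / self.lam
        self.history.append(measured_bw)

    def predict(self):
        """Predict the bandwidth of the next chunk from the last n samples."""
        if len(self.history) < self.n:
            return self.history[-1] if self.history else 0.0
        return float(self.w @ np.array(self.history[-self.n:]))
```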

Keywords: HAS; ABR; DASH; CMAF; low-latency; HTTP chunked transfer encoding; bandwidth measurement and prediction; RLS; encoding parameters; FFmpeg

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.


Wednesday, May 5, 2021

ICME’20: Multi-Period Per-Scene Optimization for HTTP Adaptive Streaming

Multi-Period Per-Scene Optimization for HTTP Adaptive Streaming

IEEE International Conference on Multimedia and Expo
July 6–10, 2020, London, United Kingdom
https://www.2020.ieeeicme.org/

[PDF][Slides][Video]

Venkata Phani Kumar M (Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Alpen-Adria-Universität Klagenfurt) and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: Video delivery over the Internet has become more and more established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations, and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexity of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate, compared to conventional approaches to video content delivery.
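As a toy illustration of the per-scene idea (not the actual MiPSO optimization), the sketch below picks, for each scene/period, the cheapest of several hypothetical resolution-bitrate-PSNR candidates that still reaches a target quality; all numbers are made up.

```python
# Hypothetical per-scene rate-quality measurements:
# scene -> list of (resolution, bitrate_kbps, psnr_db) candidates.
scenes = {
    "scene1": [(1080, 4300, 41.2), (1080, 3500, 40.4), (720, 2800, 39.8)],
    "scene2": [(1080, 4300, 44.0), (1080, 2600, 43.0), (720, 1900, 42.1)],
}

def min_bitrate_ladder(scenes, target_psnr):
    """Per scene/period, pick the cheapest encoding reaching the target;
    fall back to the highest-quality candidate if none reaches it."""
    ladder = {}
    for scene, candidates in scenes.items():
        feasible = [c for c in candidates if c[2] >= target_psnr]
        ladder[scene] = (min(feasible, key=lambda c: c[1]) if feasible
                         else max(candidates, key=lambda c: c[2]))
    return ladder

print(min_bitrate_ladder(scenes, target_psnr=40.0))
# scene1 gets the 3500 kbps encoding, scene2 the much cheaper 1900 kbps one.
```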

Keywords: Adaptive Streaming, Video-on-Demand, Per-Scene Encoding, Media Presentation Description

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.


Wednesday, March 31, 2021

MPEG news: a report from the 133rd meeting (virtual)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award

The 133rd MPEG meeting was once again held as an online meeting and, this time, kicked off with great news: MPEG is among the organizations honored as recipients of a 72nd Annual Technology & Engineering Emmy® Award, specifically for the MPEG Systems File Format Subgroup and its ISO Base Media File Format (ISOBMFF).

The official press release can be found here and comprises the following items:
  • 6th Emmy® Award for MPEG Technology: MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award
  • Essential Video Coding (EVC) verification test finalized
  • MPEG issues a Call for Evidence on Video Coding for Machines
  • Neural Network Compression for Multimedia Applications – MPEG calls for technologies for incremental coding of neural networks
  • MPEG Systems reaches the first milestone for supporting Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF)
  • MPEG Systems continuously enhances Dynamic Adaptive Streaming over HTTP (DASH)
  • MPEG Systems reached the first milestone to carry event messages in tracks of the ISO Base Media File Format
In this report, I’d like to focus on ISOBMFF, EVC, CMAF, and DASH.

MPEG Systems File Format Subgroup wins Technology & Engineering Emmy® Award

MPEG is pleased to report that the File Format subgroup of MPEG Systems is being recognized this year by the National Academy of Television Arts & Sciences (NATAS) with a Technology & Engineering Emmy® for their 20 years of work on the ISO Base Media File Format (ISOBMFF). This format was first standardized in 1999 as part of the MPEG-4 Systems specification and is now in its 6th edition as ISO/IEC 14496-12. It has been used and adopted by many other specifications, e.g.:
  • MP4 and 3GP file formats;
  • Carriage of NAL unit structured video in the ISO Base Media File Format, which provides support for AVC, HEVC, VVC, EVC, and probably soon LCEVC;
  • MPEG-21 file format;
  • Dynamic Adaptive Streaming over HTTP (DASH) and Common Media Application Format (CMAF);
  • High-Efficiency Image Format (HEIF);
  • Timed text and other visual overlays in ISOBMFF;
  • Common encryption format;
  • Carriage of timed metadata metrics of media;
  • Derived visual tracks;
  • Event message track format;
  • Carriage of uncompressed video;
  • Omnidirectional Media Format (OMAF);
  • Carriage of visual volumetric video-based coding data;
  • Carriage of geometry-based point cloud compression data;
  • … to be continued!
This is MPEG’s fourth Technology & Engineering Emmy® Award (after MPEG-1 and MPEG-2 together with JPEG in 1996, Advanced Video Coding (AVC) in 2008, and MPEG-2 Transport Stream in 2013) and sixth overall Emmy® Award, including the Primetime Engineering Emmy® Awards for Advanced Video Coding (AVC) High Profile in 2008 and High-Efficiency Video Coding (HEVC) in 2017.

Essential Video Coding (EVC) verification test finalized

At the 133rd MPEG meeting, a verification testing assessment of the Essential Video Coding (EVC) standard was completed. The first part of the EVC verification test, using high dynamic range (HDR) and wide color gamut (WCG) content, had already been completed at the 132nd MPEG meeting. A subjective quality evaluation was conducted, comparing the EVC Main profile to the HEVC Main 10 profile and the EVC Baseline profile to the AVC High 10 profile:
  • Analysis of the subjective test results showed that the average bitrate savings for EVC Main profile are approximately 40% compared to HEVC Main 10 profile, using UHD and HD SDR content encoded in both random access and low delay configurations.
  • The average bitrate savings for the EVC Baseline profile compared to the AVC High 10 profile is approximately 40% using UHD SDR content encoded in the random-access configuration and approximately 35% using HD SDR content encoded in the low delay configuration.
  • Verification test results using HDR content showed average bitrate savings for the EVC Main profile of approximately 35% compared to the HEVC Main 10 profile.
By providing significantly improved compression efficiency compared to HEVC and earlier video coding standards while encouraging the timely publication of licensing terms, the MPEG-5 EVC standard is expected to meet the market needs of emerging delivery protocols and networks, such as 5G, enabling the delivery of high-quality video services to an ever-growing audience.
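Bitrate savings such as those reported above are conventionally computed with the Bjøntegaard delta rate (BD-rate) metric, which compares two rate-quality curves at equal quality. Below is a compact sketch of the original polynomial-fit variant of BD-rate; it is a generic illustration with made-up numbers, not the exact tool used in the MPEG verification tests.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjøntegaard delta rate: average bitrate difference (%) of the test
    codec vs. the anchor at equal quality, via cubic fits in log-rate.
    Expects (at least) four rate/quality points per codec."""
    p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)  # PSNR -> log-rate
    p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)    # integrate both fits
    avg_diff = (np.polyval(it, hi) - np.polyval(it, lo)
                - np.polyval(ia, hi) + np.polyval(ia, lo)) / (hi - lo)
    return (10 ** avg_diff - 1) * 100  # negative => test codec saves bitrate

# Example with invented rate (kbps) / PSNR (dB) points:
r_hevc = [1000, 2000, 4000, 8000]; q_hevc = [34.0, 36.5, 39.0, 41.0]
r_evc  = [ 600, 1200, 2400, 4800]; q_evc  = [34.0, 36.5, 39.0, 41.0]
print(round(bd_rate(r_hevc, q_hevc, r_evc, q_evc), 1))  # -40.0
```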

In addition to the verification tests, the systems support for EVC and VVC, e.g., their carriage in CMAF, was further improved (see below).

Research aspects: as for every new video codec, its compression efficiency and computational complexity are important performance metrics. Additionally, the availability of (efficient) open-source implementations (e.g., x264, x265, soon x266, VVenC, aomenc) is vital for its adoption in the (academic) research community.

MPEG Systems reaches the first milestone for supporting Versatile Video Coding (VVC) and Essential Video Coding (EVC) in the Common Media Application Format (CMAF)

At the 133rd MPEG meeting, MPEG Systems promoted Amendment 2 of the Common Media Application Format (CMAF) to Committee Draft Amendment (CDAM) status, the first major milestone in the ISO/IEC approval process. This amendment defines:
  • constraints to (i) Versatile Video Coding (VVC) and (ii) Essential Video Coding (EVC) video elementary streams when carried in a CMAF video track;
  • codec parameters to be used for CMAF switching sets with VVC and EVC tracks; and
  • support of the newly introduced MPEG-H 3D Audio profile.
It is expected to reach its final milestone in early 2022. For research aspects related to CMAF, the reader is referred to the next section about DASH.

MPEG Systems continuously enhances Dynamic Adaptive Streaming over HTTP (DASH)

At the 133rd MPEG meeting, MPEG Systems promoted Part 8 of Dynamic Adaptive Streaming over HTTP (DASH), also referred to as “Session-based DASH,” to its final stage of standardization (i.e., Final Draft International Standard (FDIS)).

Historically, every DASH client uses the same Media Presentation Description (MPD), as this best serves the scalability of the service. However, there have been increasing requests from the industry to enable customized manifests for personalized services. MPEG Systems has standardized a solution to this problem without sacrificing scalability: Session-based DASH adds a mechanism to the MPD to refer to another document, called the Session-based Description (SBD), which allows per-session information. The DASH client can use this information (i.e., variables and their values) provided in the SBD to derive the URLs for its HTTP GET requests.
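A minimal sketch of the general idea: the client substitutes per-session variables obtained from an SBD into a segment URL template before issuing HTTP GET requests. The SBD contents, the template, and the substitution syntax shown here are hypothetical; the normative details are specified in ISO/IEC 23009-8.

```python
# Hypothetical SBD key/value pairs (in practice retrieved from the URL
# referenced in the MPD).
sbd = {"sessionid": "abc123", "cdn": "edge-eu-1"}

# Hypothetical segment URL template mixing a DASH $Number$ identifier with
# SBD-provided variables.
template = "https://{cdn}.example.com/video/seg-$Number$.m4s?sid={sessionid}"

def resolve(template, sbd, number):
    """Substitute the segment number and per-session variables into the URL."""
    url = template.replace("$Number$", str(number))
    for key, value in sbd.items():
        url = url.replace("{" + key + "}", value)
    return url

print(resolve(template, sbd, 42))
# https://edge-eu-1.example.com/video/seg-42.m4s?sid=abc123
```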

An updated overview of DASH standards/features can be found in the Figure below.
Figure: MPEG DASH status as of January 2021.

Research aspects: CMAF is most likely to become the main segment format used in the context of HTTP adaptive streaming (HAS) and, thus, also DASH (hence the name common media application format). Supporting a plethora of media coding formats will inevitably result in a multi-codec dilemma that needs to be addressed soon, as there will be no flag day on which everyone switches to a new coding format. Thus, designing efficient bitrate ladders for multi-codec delivery will be an interesting research aspect, which needs to include device/player support (i.e., some devices/players will support only a subset of the available codecs), storage capacity/costs within the cloud as well as within the delivery network, and network distribution capacity/costs (i.e., CDN costs); a toy cost model is sketched below.
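The toy cost model below makes the trade-off concrete: adding a more efficient codec lowers egress cost for the devices that can decode it but adds storage (and packaging) cost for another ladder. All numbers, prices, and reach shares are invented, and device support is assumed to be nested (every HEVC-capable device also decodes AVC, and so on).

```python
# Hypothetical inputs: per-codec storage footprint for a full ladder (GB),
# egress per view (GB), and the share of devices able to decode the codec.
codecs = {
    "avc":  {"storage": 100, "egress_per_view": 1.0, "reach": 1.00},
    "hevc": {"storage":  60, "egress_per_view": 0.6, "reach": 0.80},
    "vvc":  {"storage":  40, "egress_per_view": 0.4, "reach": 0.20},
}

def delivery_cost(selected, views=1e6, storage_price=0.02, egress_price=0.05):
    """Total cost if each device streams the most efficient codec it supports.
    Assumes nested device support; AVC acts as the universal fallback."""
    cost = sum(codecs[c]["storage"] * storage_price for c in selected)
    served = 0.0
    for c in sorted(selected, key=lambda c: codecs[c]["egress_per_view"]):
        share = max(0.0, codecs[c]["reach"] - served)  # viewers not yet served
        cost += views * share * codecs[c]["egress_per_view"] * egress_price
        served += share
    return cost

for choice in (["avc"], ["avc", "hevc"], ["avc", "hevc", "vvc"]):
    print(choice, round(delivery_cost(choice), 2))
```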

The 134th MPEG meeting will again be an online meeting in April 2021. Click here for more information about MPEG meetings and their developments.

Wednesday, February 3, 2021

IEEE VCIP 2021: Call for Special Session Proposals

2021 International Conference on Visual Communications and Image Processing (VCIP)

Munich, Germany, December 5-8, 2021

https://vcip2021.org/

Call for Special Session Proposals [PDF]

The 2021 International Conference on Visual Communications and Image Processing (VCIP), sponsored by the IEEE Circuits and Systems Society, will be held in Munich, Germany, December 5-8, 2021.

As usual, special sessions complement the regular program of VCIP 2021. They are intended to provide a sample of the state of the art and to highlight important emerging research directions in fields of particular interest to the VCIP participants. The idea is to have a focused effort on a ‘special topic’ rather than broad coverage.

This Call is inviting Special Session Proposals from the visual communications and image processing community according to the requirements defined below.

Requirements

The target for each Special Session is four papers. The following information should be included in the proposal:

  • Title of the proposed special session
  • Names and affiliations of the organizers (including brief bio and contact info)
  • Session abstract (approx. 250 words) including the motivation and significance of the topic, and the rationale for the proposed special session
  • List of invited papers (including a tentative title, author list, and abstract for each paper)
  • Optionally, a description of how the special session will be organized at VCIP in order to make it a truly special event

In addition to invited papers, other potential authors will be allowed to submit papers to Special Sessions. 

All submitted special session papers shall conform to the format and length requirements of the regular session papers. If more than four papers of a special session are accepted, some of the papers will be moved to the regular paper sessions of the conference.

Proposals will be evaluated based on the timeliness of the topic and relevance to VCIP, as well as the track record of the organizers and anticipated quality of papers in the proposed session. Kindly note that all papers in a special session will be peer-reviewed following the regular paper review process to ensure that the contributions are of the highest quality.

To submit a special session proposal (in a single PDF file) or for additional information regarding the special sessions, please contact Special Session Co-Chairs: Fernando Pereira (fp@lx.it.pt) and Christian Timmerer (christian.timmerer@aau.at).

Important Dates

  • Special Session proposal submission deadline: 28 March 2021
  • Special Session proposal acceptance notification: 11 April 2021
  • Special Session paper submission: 8 June 2021
  • Paper acceptance notification: 6 September 2021


Sunday, November 22, 2020

MPEG news: a report from the 132nd meeting (virtual)

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will be also posted at ACM SIGMM Records.

The 132nd MPEG meeting was the first meeting with the new structure. That is, ISO/IEC JTC 1/SC 29/WG 11 -- the official name of MPEG under the ISO structure -- was disbanded after the 131st MPEG meeting, and some of the subgroups of WG 11 (MPEG) were elevated to independent MPEG Working Groups (WGs) and Advisory Groups (AGs) of SC 29 rather than subgroups of the former WG 11. Thus, the MPEG community is now an affiliated group of WGs and AGs that will continue meeting together according to previous MPEG meeting practices and will further advance the standardization activities of the MPEG work program.

The new structure is as follows (incl. Convenors and position within the former WG 11 structure):

  • AG 2 MPEG Technical Coordination (Convenor: Prof. Jörn Ostermann; for overall MPEG work coordination and prev. known as the MPEG chairs meeting; it’s expected that one can also provide inputs to this AG without being a member of this AG)
  • WG 2 MPEG Technical Requirements (Convenor: Dr. Igor Curcio; former Requirements subgroup)
  • WG 3 MPEG Systems (Convenor: Dr. Youngkwon Lim; former Systems subgroup)
  • WG 4 MPEG Video Coding (Convenor: Prof. Lu Yu; former Video subgroup)
  • WG 5 MPEG Joint Video Coding Team(s) with ITU-T SG 16 (Convenor: Prof. Jens-Rainer Ohm; former JVET)
  • WG 6 MPEG Audio Coding (Convenor: Dr. Schuyler Quackenbush; former Audio subgroup)
  • WG 7 MPEG Coding of 3D Graphics (Convenor: Prof. Marius Preda, former 3DG subgroup)
  • WG 8 MPEG Genome Coding (Convenor: Prof. Marco Mattavelli; newly established WG)
  • AG 3 MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim; former Communications subgroup)
  • AG 5 MPEG Visual Quality Assessment (Convenor: Prof. Mathias Wien; former Test subgroup).

The 132nd MPEG meeting was held as an online meeting and more than 300 participants continued to work efficiently on standards for the future needs of the industry. As a group, MPEG started to explore new application areas that will benefit from standardized compression technology in the future. A new web site has been created and can be found at http://mpeg.org/.

The official press release can be found here and comprises the following items:

  • Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance and Reference Software Standards Reach their First Milestone
  • MPEG Completes Geometry-based Point Cloud Compression (G-PCC) Standard
  • MPEG Evaluates Extensions and Improvements to MPEG-G and Announces a Call for Evidence on New Advanced Genomics Features and Technologies
  • MPEG Issues Draft Call for Proposals on the Coded Representation of Haptics
  • MPEG Evaluates Responses to MPEG IPR Smart Contracts CfP
  • MPEG Completes Standard on Harmonization of DASH and CMAF
  • MPEG Completes 2nd Edition of the Omnidirectional Media Format (OMAF)
  • MPEG Completes the Low Complexity Enhancement Video Coding (LCEVC) Standard

In this report, I’d like to focus on VVC, G-PCC, DASH/CMAF, OMAF, and LCEVC.

Versatile Video Coding (VVC) Ultra-HD Verification Test Completed and Conformance & Reference Software Standards Reach their First Milestone

MPEG completed a verification testing assessment of the recently ratified Versatile Video Coding (VVC) standard for ultra-high definition (UHD) content with standard dynamic range, as may be used in newer streaming and broadcast television applications. The verification test was performed using rigorous subjective quality assessment methods and showed that VVC provides a compelling gain over its predecessor -- the High Efficiency Video Coding (HEVC) standard produced in 2013. In particular, the verification test was performed using the VVC reference software implementation (VTM) and the recently released open-source encoder implementation of VVC (VVenC):
  • Using its reference software implementation (VTM), VVC showed bit rate savings of roughly 45% over HEVC for comparable subjective video quality.
  • Using VVenC, additional bit rate savings of more than 10% relative to VTM were observed, while VVenC at the same time runs significantly faster than the reference software implementation.
Additionally, the standardization work for both conformance testing and reference software for the VVC standard reached its first major milestone, i.e., progressing to the Committee Draft ballot in the ISO/IEC approval process. The conformance testing standard (ISO/IEC 23090-15) will ensure interoperability among the diverse applications that use the VVC standard, and the reference software standard (ISO/IEC 23090-16) will provide an illustration of the capabilities of VVC and a valuable example showing how the standard can be implemented. The reference software will further facilitate the adoption of the standard by being available for use as the basis of product implementations.
Research aspects: as for every new video codec, its compression efficiency and computational complexity are important performance metrics. While the reference software (VTM) provides a valid reference in terms of compression efficiency, it is not optimized for runtime. VVenC already provides a significant improvement, and with x266 another open-source implementation will be available soon. Together with AOMedia's AV1 (including its possible successor AV2), we are looking forward to a lively future in the area of video codecs.

MPEG Completes Geometry-based Point Cloud Compression Standard

MPEG promoted its ISO/IEC 23090-9 Geometry-based Point Cloud Compression (G-PCC) standard to the Final Draft International Standard (FDIS) stage. G-PCC addresses lossless and lossy coding of time-varying 3D point clouds with associated attributes such as color and material properties. This technology is particularly suitable for sparse point clouds. ISO/IEC 23090-5 Video-based Point Cloud Compression (V-PCC), which reached the FDIS stage in July 2020, addresses the same problem but for dense point clouds, by projecting the (typically dense) 3D point clouds onto planes, and then processing the resulting sequences of 2D images using video compression techniques. The generalized approach of G-PCC, where the 3D geometry is directly coded to exploit any redundancy in the point cloud itself, is complementary to V-PCC and particularly useful for sparse point clouds representing large environments.

Point clouds are typically represented by extremely large amounts of data, which is a significant barrier to mass-market applications. However, the relative ease of capturing and rendering spatial information compared to other volumetric video representations makes point clouds increasingly popular for displaying immersive volumetric data. The current draft reference software implementation of a lossless, intra-frame G‐PCC encoder provides a compression ratio of up to 10:1 and lossy coding of acceptable quality for a variety of applications with a ratio of up to 35:1.

By providing high immersion at currently available bit rates, the G‐PCC standard will enable various applications such as 3D mapping, indoor navigation, autonomous driving, advanced augmented reality (AR) with environmental mapping, and cultural heritage.
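At the heart of G-PCC's geometry coding is an octree: space is recursively split into eight children and, per node, an occupancy byte indicates which children contain points. The toy sketch below emits these occupancy bytes for a random point cloud; the real codec additionally entropy-codes them with sophisticated context models, so this is an illustration of the principle only.

```python
import numpy as np

def octree_occupancy(points, origin, size, depth, stream):
    """Recursively partition a point set; emit one occupancy byte per node."""
    if depth == 0 or len(points) == 0:
        return
    half = size / 2.0
    children = []
    occupancy = 0
    for i in range(8):
        offset = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
        lo = origin + offset
        mask = np.all((points >= lo) & (points < lo + half), axis=1)
        child = points[mask]
        if len(child):
            occupancy |= 1 << i  # mark this octant as occupied
            children.append((child, lo))
    stream.append(occupancy)
    for child, lo in children:
        octree_occupancy(child, lo, half, depth - 1, stream)

pts = np.random.rand(1000, 3)  # toy point cloud in the unit cube
stream = []
octree_occupancy(pts, np.zeros(3), 1.0, depth=6, stream=stream)
print(len(stream), "occupancy bytes for", len(pts), "points")
```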
Research aspects: the main research focus related to G-PCC and V-PCC is currently on compression efficiency, but one should not dismiss delivery aspects, including dynamic, adaptive streaming. A recent paper on this topic has been published in the IEEE Communications Magazine and is entitled "From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom".

MPEG Finalizes the Harmonization of DASH and CMAF

MPEG successfully completed the harmonization of Dynamic Adaptive Streaming over HTTP (DASH) with the Common Media Application Format (CMAF), featuring a DASH profile for use with CMAF (as part of the 1st Amendment of ISO/IEC 23009-1:2019 4th edition).

CMAF and DASH segments are both based on the ISO Base Media File Format (ISOBMFF), which per se enables smooth integration of both technologies. Most importantly, this DASH profile defines (a) a normative mapping of CMAF structures to DASH structures and (b) how to use Media Presentation Description (MPD) as a manifest format.

Additional tools added to this amendment include
  • timing and processing models for DASH events and timed metadata tracks with in-band event streams,
  • a method for specifying the resynchronization points of segments when the segments have internal structures that allow container-level resynchronization,
  • an MPD patch framework that allows the transmission of partial MPD information, as opposed to the complete MPD, using the XML patch framework as defined in IETF RFC 5261 (a simplified sketch follows below), and
  • content protection enhancements for efficient signaling.
It is expected that the 5th edition of the MPEG DASH standard (ISO/IEC 23009-1) containing this change will be issued at the 133rd MPEG meeting in January 2021. An overview of DASH standards/features can be found in the Figure below.
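To illustrate the MPD patch idea in the simplest possible terms: instead of re-fetching the full MPD, the client fetches a small document describing changes and applies them to its cached copy. The sketch below applies a heavily simplified attribute-replacement patch; a real MPD patch document uses the RFC 5261 replace/add/remove element syntax, which is not implemented here.

```python
import xml.etree.ElementTree as ET

# Toy MPD fragment (namespaces omitted for brevity).
mpd = ET.fromstring(
    '<MPD publishTime="2021-01-01T00:00:00Z">'
    '<Period id="1"><AdaptationSet id="0"/></Period></MPD>')

# Hypothetical, heavily simplified patch: (XPath, attribute, new value).
patch_ops = [
    (".", "publishTime", "2021-01-01T00:00:30Z"),
    ("./Period", "start", "PT30S"),
]

def apply_patch(root, ops):
    """Apply attribute replacements to all nodes matched by each XPath."""
    for xpath, attr, value in ops:
        for node in root.findall(xpath):
            node.set(attr, value)

apply_patch(mpd, patch_ops)
print(ET.tostring(mpd, encoding="unicode"))
```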
Research aspects: one of the features enabled by CMAF is low-latency streaming, which is actively researched within the multimedia systems community (e.g., here). The main research focus has been on the ABR logic, while its impact on the network is not yet fully understood and requires strong collaboration among stakeholders along the delivery path, including ingest, encoding, packaging, (encryption), content delivery network (CDN), and consumption. A holistic view on ABR is needed to enable innovation and the next step towards the future generation of streaming technologies (https://athena.itec.aau.at/).

MPEG Completes 2nd Edition of the Omnidirectional Media Format

MPEG completed the standardization of the 2nd edition of the Omnidirectional MediA Format (OMAF) by promoting ISO/IEC 23090-2 to Final Draft International Standard (FDIS) status, including the following features:
  • “Late binding” technologies to deliver and present only the part of the content that corresponds to the user's dynamically changing viewport (a toy tile-selection sketch follows after this list). To enable an efficient implementation of such a feature, this edition of the specification introduces the concept of bitstream rewriting, in which a compliant bitstream is dynamically generated that, by combining the received portions of the bitstream, covers only the user's viewport on the client.
  • Extension of OMAF beyond 360-degree video. This edition introduces the concept of viewpoints, which can be considered as user-switchable camera positions for viewing content or as temporally contiguous parts of a storyline to provide multiple choices for the storyline a user can follow.
  • Enhanced use of video, image, or timed text overlays on top of omnidirectional visual background video or images related to a sphere or a viewport.
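As a toy illustration of the viewport-dependent ("late binding") idea, the sketch below determines which tiles of an equirectangular tile grid overlap a given viewport. The grid dimensions and the crude sampling approach (which ignores projection distortion near the poles) are illustrative assumptions, not part of the OMAF specification.

```python
def tiles_for_viewport(yaw_deg, pitch_deg, hfov_deg, vfov_deg, cols=8, rows=4):
    """Return the (row, col) tiles of an equirectangular grid overlapping a
    rectangular viewport, given viewing direction and field of view."""
    def wrap(d):  # wrap longitude into [-180, 180)
        return (d + 180.0) % 360.0 - 180.0
    tile_w, tile_h = 360.0 / cols, 180.0 / rows
    lon_min, lon_max = yaw_deg - hfov_deg / 2, yaw_deg + hfov_deg / 2
    lat_min = max(-90.0, pitch_deg - vfov_deg / 2)
    lat_max = min(90.0, pitch_deg + vfov_deg / 2)
    tiles = set()
    steps = 32  # sample the viewport on a coarse grid
    for i in range(steps + 1):
        for j in range(steps + 1):
            lon = wrap(lon_min + (lon_max - lon_min) * i / steps)
            lat = lat_min + (lat_max - lat_min) * j / steps
            col = min(cols - 1, int((lon + 180.0) / tile_w))
            row = min(rows - 1, int((90.0 - lat) / tile_h))
            tiles.add((row, col))
    return sorted(tiles)

print(tiles_for_viewport(yaw_deg=0, pitch_deg=0, hfov_deg=90, vfov_deg=60))
```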
Research aspects: standards usually define formats to enable interoperability, but various informative aspects are left open for industry competition and are subject to research and development. The same holds for OMAF: its 2nd edition enables researchers and developers to work towards efficient viewport-adaptive implementations focusing on the user's viewport.

MPEG Completes the Low Complexity Enhancement Video Coding Standard

MPEG is pleased to announce the completion of the new ISO/IEC 23094-2 standard, i.e., Low Complexity Enhancement Video Coding (MPEG-5 Part 2 LCEVC), which has been promoted to Final Draft International Standard (FDIS) at the 132nd MPEG meeting.
  • LCEVC adds an enhancement data stream that can appreciably improve the resolution and visual quality of reconstructed video at effective compression efficiency and limited complexity by building on top of existing and future video codecs (a toy reconstruction sketch follows after this list).
  • LCEVC can be used to complement devices originally designed only for decoding the base layer bitstream, by using firmware, operating system, or browser support. It is designed to be compatible with existing video workflows (e.g., CDNs, metadata management, DRM/CA) and network protocols (e.g., HLS, DASH, CMAF) to facilitate the rapid deployment of enhanced video services.
  • LCEVC can be used to deliver higher video quality in limited bandwidth scenarios, especially when the available bit rate is low for high-resolution video delivery and decoding complexity is a challenge. Typical use cases include mobile streaming and social media, and services that benefit from high-density/low-power transcoding.
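The sketch below shows the gist of the layered reconstruction referenced in the first bullet: upsample the decoded base-layer frame and add a coded residual. It is a toy model with random data; the actual LCEVC standard defines the transforms, quantization, and signaling that make this work in practice.

```python
import numpy as np

def lcevc_style_reconstruct(base_frame, residual, scale=2):
    """Toy two-layer reconstruction: nearest-neighbor upsample of the
    base-codec frame plus a coded residual at the enhanced resolution."""
    up = np.repeat(np.repeat(base_frame, scale, axis=0), scale, axis=1)
    return np.clip(up.astype(np.int16) + residual, 0, 255).astype(np.uint8)

# Toy data: a low-resolution "decoded base layer" and a residual layer.
base = np.random.randint(0, 256, (540, 960), dtype=np.uint8)
residual = np.random.randint(-8, 9, (1080, 1920), dtype=np.int16)
print(lcevc_style_reconstruct(base, residual).shape)  # (1080, 1920)
```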
Research aspects: LCEVC provides a kind of scalable video coding by combining hardware- and software-based decoders, which allows for a certain flexibility as part of regular software life-cycle updates. However, LCEVC has never been compared to Scalable Video Coding (SVC) and Scalable High-Efficiency Video Coding (SHVC), which could be an interesting aspect for future work.

The 133rd MPEG meeting will again be an online meeting in January 2021.

Click here for more information about MPEG meetings and their developments.

Monday, October 19, 2020

ACM MMSys 2021 Research Track - Call for Papers


“Bridging Deep Media and Communications”
Sept. 28 - Oct. 1, 2021, Istanbul, Turkey

Scope and Topics of Interest
The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, databases, mobile computing, distributed systems, computer vision, and middleware communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to investigate the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types.

Such individual system components include:
  • Operating systems
  • Distributed architectures and protocols
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New I/O architectures/devices, innovative uses, and algorithms
  • Representation of continuous or time-dependent media
  • Metrics and measurement tools to assess performance
This touches on many hot topics, including content preparation and (adaptive) delivery systems, HDR, games, virtual/augmented/mixed reality, 3D video, immersive systems, plenoptics, 360-degree/volumetric video delivery, multimedia Internet of Things (IoT), multi- and many-core, GPGPUs, mobile multimedia and 5G, wearable multimedia, cloud-based multimedia, P2P, cyber-physical systems, multi-sensory experiences, smart cities, and QoE.

We encourage submissions in the following Focus Areas:
  • Machine learning and statistical modeling for streaming
  • Volumetric media and collaborative immersive environments
  • Fake media and tools for preventing illegal broadcasts
Important Dates
  • Submission deadline: December 14, 2020 (firm deadline)
  • Acceptance notification: February 19, 2021
  • Camera-ready deadline: April 2, 2021
Submission Instructions

Online submission: https://mmsys2021.hotcrp.com/

Papers must be up to 12 pages, plus optional pages for references only, in PDF format, prepared in the ACM style and written in English. Papers must be anonymised and must not reveal any information about the authors.

MMSys papers enable authors to present entire multimedia systems or research work that builds on considerable amounts of earlier work in a self-contained manner. All submissions will be peer-reviewed by at least 3 TPC members and will be evaluated for their scientific quality. Authors will have a chance to submit their rebuttals before online discussions among the TPC members. MMSys'21 will also continue to support scientific reproducibility by implementing the ACM reproducibility badge system. Accepted papers will be published in the ACM Digital Library.

General Chairs
  • Özgü Alay (Simula Metropolitan and University of Oslo, Norway)
  • Cheng-Hsin Hsu (National Tsing Hua University, Taiwan)
  • Ali C. Begen (Ozyegin University and Networked Media, Turkey)

TPC Chairs
  • Lucile Sassatelli (Universite Cote d'Azur, France)
  • Feng Qian (University of Minnesota, USA)

Supporters

** Several travel grants will be offered **

Adobe, Ozyegin University, Turkish Airlines, Twitch, YouTube, Comcast, Medianova, MulticoreWare, AMD, Argela, Bigdata Teknoloji, Bitmovin, DASH-IF, Mux, Nokia, Pixery, SSIMWAVE, Streaming Video Alliance, Tencent, Unified Streaming, Ericsson, Interdigital, Sky

Follow Us

This call for papers in PDF: https://2021.acmmmsys.org/files/cfp_mmsys21.pdf