Monday, November 8, 2021

ACM MMSys 2022: Call for Papers: "Enabling New Horizons for the Media Ecosystem"


The 13th ACM Multimedia Systems Conference (and its associated workshops MMVE 2022, NOSSDAV 2022, and GameSys 2022) will be held from 14 to 17 June 2022 in Athlone, Ireland. MMSys 2022 will provide a warm welcome to leading experts from academia and industry to present and share their latest research findings in multimedia systems. While research on specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, databases, mobile computing, distributed systems, computer vision, and middleware communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to investigate the intersections and the interplay of the various approaches and solutions developed across these domains.

MMSys is a venue for researchers who explore:

  • Complete multimedia systems that provide a new kind of multimedia experience or system whose overall performance improves the state-of-the-art through new research results in more than one component
  • Enhancements to one or more system components and/or aspects that provide improvements over the state of the art in traditional and next-generation media services
  • Evaluation studies, models and/or methodologies that provide added value to the multimedia systems community

This touches on aspects of many traditional and emerging topics, including but not limited to: 

  • Content preparation and (adaptive) delivery systems
  • High dynamic range (HDR)
  • Games, virtual/augmented/mixed/extended reality
  • 3D video
  • Immersive media systems
  • Plenoptics
  • 360-degree video
  • Network virtualisation
  • AI-driven multimedia systems
  • Multimedia and the Internet of Things (IoT)
  • GPGPUs
  • Mobile multimedia and 5G/6G
  • Wearable multimedia
  • Peer-to-peer (P2P) or hybrid systems
  • Cloud-based multimedia
  • Digital twins
  • Cyber-physical systems
  • Multi-sensory experiences
  • Smart cities
  • Autonomous multimedia systems
  • Quality of experience (QoE)
  • Machine/deep learning and statistical modeling for media processing and distribution
  • Volumetric media: from capture to consumption

IMPORTANT DATES (Research Track):

  • Full Paper Submission: January 21st 2022
  • Acceptance Notification: March 2nd, 2022
  • Camera Ready Deadline: April 1st, 2022
  • Conference: June 14th-17th, 2022

Submission Instructions

Online submission: https://mmsys2022research.hotcrp.com/ (Will be activated soon!)

Papers must be up to 12 pages long (in PDF format), prepared in the ACM style, and written in English. MMSys papers allow authors to present entire multimedia systems, or research that builds on considerable amounts of earlier work, in a self-contained manner. Accepted papers are published in the ACM Digital Library.

Reviewing is double-blind: all submissions will be peer-reviewed by at least three TPC members and evaluated for their scientific quality. Authors will have a chance to submit rebuttals before the online discussions among the TPC members.

ACM SIGMM has a tradition of publishing open datasets (MMSys) and open-source projects (ACM Multimedia). MMSys 2022 will continue to support scientific reproducibility by implementing the ACM reproducibility badge system. The Reproducibility Chairs will contact the authors of all accepted papers, inviting them to make their datasets and code available and thus obtain an ACM badge (visible in the ACM DL). The additional material will be published as appendices, with no effect on the final page count.

TECHNICAL SPONSOR: ACM SIGMM

General Chairs

  • Niall Murray (Technological University of the Shannon, Ireland)
  • Mylene Farias (University of Brasilia, Brazil)

Technical Programme Chairs

  • Mario Montagud (University of Valencia and i2CAT Foundation, Spain)
  • Irene Viola (Centrum Wiskunde & Informatica, Netherlands)

Tuesday, October 19, 2021

Understanding Quality of Experience of Heuristic-based HTTP Adaptive Bitrate Algorithms


NOSSDAV’21: The 31st edition of the Workshop on Network and Operating System Support for Digital Audio and Video
Sept. 28-Oct. 1, 2021, Istanbul, Turkey
Conference Website

Babak Taraghi*, Abdelhak Bentaleb**, Christian Timmerer*, Roger Zimmermann** and Hermann Hellwagner*
* Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt
** National University of Singapore

Abstract: Adaptive BitRate (ABR) algorithms play a crucial role in delivering the highest possible viewer’s Quality of Experience (QoE) in HTTP Adaptive Streaming (HAS). Online video streaming service providers use HAS – the dominant video streaming technique on the Internet – to deliver the best QoE for their users. Viewers’ satisfaction relies heavily on how well the ABR algorithm of a media player can adapt the stream’s quality to the current network conditions. QoE for end-to-end video streaming sessions has been evaluated in many research projects to give better insight into the quality metrics. Objective evaluation models such as ITU Telecommunication Standardization Sector (ITU-T) P.1203 allow for the calculation of a Mean Opinion Score (MOS) by considering various QoE metrics, while subjective evaluation is the best assessment approach for investigating the end-user opinion of a video streaming session’s experienced quality. We have conducted subjective evaluations with crowdsourced participants and evaluated the MOS of the sessions using the ITU-T P.1203 quality model. This paper’s main contribution is a comparison of subjective and objective evaluations for well-known heuristic-based ABRs.
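The interplay of the QoE metrics mentioned in the abstract can be illustrated with a simplified, hypothetical scoring function. This is only a sketch: the standardized ITU-T P.1203 computation is far more involved, and all weights below are made-up values for illustration.

```python
# Toy QoE score on a 1-5 MOS-like scale, combining average quality,
# quality switches, and stalling. A simplified illustration only --
# NOT the standardized ITU-T P.1203 model; all weights are assumptions.

def toy_qoe_score(avg_quality, num_switches, total_stall_s, duration_s):
    """avg_quality in [1, 5]; penalty weights are hypothetical."""
    switch_penalty = 0.1 * num_switches
    stall_penalty = 2.0 * (total_stall_s / duration_s) if duration_s else 0.0
    score = avg_quality - switch_penalty - stall_penalty
    return max(1.0, min(5.0, score))  # clamp to the MOS scale

# A session with good quality but two switches and 3 s of stalling:
print(toy_qoe_score(avg_quality=4.5, num_switches=2, total_stall_s=3, duration_s=60))
```

Even this toy model captures the qualitative trade-off the paper studies: stalling is weighted much more heavily than a quality switch.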

Keywords: HTTP Adaptive Streaming, ABR Algorithms, Quality of Experience, Crowdsourcing, Subjective Evaluation, Objective Evaluation, MOS, (ITU-T) P.1203

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Sunday, October 17, 2021

ES-HAS: An Edge- and SDN-Assisted Framework for HTTP Adaptive Video Streaming


NOSSDAV’21: The 31st edition of the Workshop on Network and Operating System Support for Digital Audio and Video
Sept. 28-Oct. 1, 2021, Istanbul, Turkey
Conference Website
[PDF][Slides][Video]

Reza Farahani, Farzad Tashtarian, Alireza Erfanian, Christian Timmerer, Mohammad Ghanbari and Hermann Hellwagner
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Recently, HTTP Adaptive Streaming (HAS) has become the dominant video delivery technology over the Internet. In HAS, clients have full control over the media streaming and adaptation processes. In a pure client-based HAS adaptation scheme, the lack of coordination among clients and the lack of awareness of network conditions may lead to sub-optimal user experience and resource utilization. Software-Defined Networking (SDN) has recently been considered to enhance the video streaming process. In this paper, we leverage the capability of SDN and Network Function Virtualization (NFV) to introduce an edge- and SDN-assisted video streaming framework called ES-HAS. We employ virtualized edge components to collect HAS clients’ requests and retrieve networking information in a time-slotted manner. These components then run an optimization model to efficiently serve clients’ requests by selecting an optimal cache server (with the shortest fetch time). In case of a cache miss, a client’s request is served (i) with an optimal replacement quality (only better quality levels with minimum deviation) from a cache server, or (ii) with the originally requested quality level from the origin server. This approach is validated through experiments on a large-scale testbed, and the performance of our framework is compared to pure client-based strategies and the SABR system [11]. Although SABR and ES-HAS show (almost) identical performance in the number of quality switches, ES-HAS outperforms SABR in terms of playback bitrate and the number of stalls by at least 70% and 40%, respectively.
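The serving decision described in the abstract can be sketched as follows. All names and data structures here are illustrative assumptions, not the actual ES-HAS implementation or its optimization model:

```python
# Hypothetical sketch of the ES-HAS serving decision: serve the requested
# quality from the cache server with the shortest fetch time; on a cache
# miss, prefer the closest *higher* cached quality (minimum deviation),
# and fall back to the origin server otherwise.

def select_source(requested, caches):
    """caches: list of (server, fetch_time, cached_quality_levels)."""
    # (i) exact hit: optimal cache server = shortest fetch time
    hits = [(t, s) for s, t, qs in caches if requested in qs]
    if hits:
        t, s = min(hits)
        return s, requested
    # (ii) replacement: only better quality levels, minimum deviation,
    # ties broken by the fastest cache offering that quality
    candidates = [(q - requested, t, s)
                  for s, t, qs in caches for q in qs if q > requested]
    if candidates:
        dev, t, s = min(candidates)
        return s, requested + dev
    # (iii) fall back to the origin with the originally requested quality
    return "origin", requested

caches = [("edge-a", 12, {2, 4}), ("edge-b", 20, {3})]
print(select_source(3, caches))  # exact hit on edge-b
print(select_source(1, caches))  # replacement: quality 2 from edge-a
print(select_source(5, caches))  # miss everywhere -> origin
```

The real framework solves this jointly for all clients in a time slot rather than greedily per request; the sketch only shows the per-request preference order.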

Keywords: Dynamic Adaptive Streaming over HTTP (DASH), Edge Computing, Network-Assisted Video Streaming, Quality of Experience (QoE), Software Defined Networking (SDN), Network Function Virtualization (NFV)

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Tuesday, October 12, 2021

ACM Mile High Video (MHV) 2022


March 1-3, 2022, Denver, CO

Deadline: Oct 22, 2021 (final)

After running as an independent event for several years, starting in 2022 Mile High Video (MHV) will be organized by the ACM Special Interest Group on Multimedia (SIGMM) to grow further. ACM MHV’22 will establish a unique forum for participants from both industry and academia to present, share and discuss innovations from content production to consumption.

ACM MHV’22 welcomes contributions from industry to share real-world problems and solutions as well as novel approaches and results from basic research typically conducted within an academic environment. ACM MHV’22 will provide a unique opportunity to view the interplay of the industry and academia in the area of video technologies.

ACM MHV contributions are solicited in, but not limited to the following areas:
• Content production, encoding and packaging
• Encoding for broadcast, mobile and OTT, and using AI/ML in encoding
• New and developing audio and video codecs
• HDR, accessibility
• Quality assessment models and tools, and user experience studies
• Workflows
• Virtualized headends, cloud-based workflows for production and distribution
• Redundancy and resilience in content origination
• Ingest protocols
• Ad insertion
• Content delivery and security
• Developments in transport protocols and new delivery paradigms
• Protection for OTT distribution and tools against piracy
• Analytics
• Streaming technologies
• Adaptive streaming and transcoding
• Low latency
• Player, playback and UX developments
• Content discovery, promotion and recommendation systems
• Protocol and Web API improvements and innovations for streaming video
• Industry trends
• Advances in interactive and immersive (xR) video
• Video coding for machines
• Cloud gaming and gaming streaming
• Provenance, content authentication and deepfakes
• Standards and interoperability
• New and developing standards in the media and delivery space
• Interoperability guidelines

Prospective speakers are invited to submit an abstract (i.e., approx. 400 words or up to one page using the ACM template) that will be peer-reviewed by the ACM MHV technical program committee (TPC) for relevance, timeliness and technical correctness.

The authors of the accepted abstracts will be invited to optionally submit a full-length paper (up to six pages + references) for possible inclusion into the conference proceedings. These papers must be original work (i.e., not published previously in a journal or conference) and will also be peer-reviewed by the ACM MHV TPC.

Accepted abstracts and full-length papers will be presented at the ACM MHV conference and will be published in the conference proceedings in the ACM Digital Library.

All prospective ACM authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects.

How to Submit an Abstract

Prospective authors are invited to submit an abstract here: https://mhv22.hotcrp.com/

Important Dates
• Abstract submission deadline: Oct. 22, 2021 (final)
• Notification of abstract acceptance: Nov. 15, 2021
• (Optional) Full-length paper submission deadline: Nov. 30, 2021
• Notification of full-length paper acceptance: Dec. 31, 2021
• Camera-ready submission (abstracts/full-length papers) deadline: Jan. 31, 2022

ACM MHV’22 Program Chairs
• Christian Timmerer (AAU; christian.timmerer AT aau.at)
• Dan Grois (Comcast; dgrois AT acm.org)

ACM MHV'22 Program Committee Members
• Florence Agboma (Sky, UK)
• Saba Ahsan (Nokia, Finland)
• Ali C. Begen (Ozyegin University, Turkey)
• Imed Bouazizi (Qualcomm, USA)
• Alan Bovik (University of Texas at Austin, USA)
• Pablo Cesar (CWI, The Netherlands)
• Pankaj Chaudhari (Hulu, USA)
• Luca De Cicco (Politecnico di Bari, Italy)
• Jan De Cock (Synamedia, Belgium)
• Thomas Edwards (Amazon Web Services, USA)
• Christian Feldmann (Bitmovin, Germany)
• Simone Ferlin-Reiter (Ericsson, Sweden)
• Carsten Griwodz (University of Oslo, Norway)
• Sally Hattori (Disney, USA)
• Carys Hughes (Sky, UK)
• Mourad Kioumgi (Sky, Germany)
• Will Law (Akamai, USA)
• Zhu Li (University of Missouri, Kansas City, USA)
• Zhi Li (Netflix, USA)
• John Luther (JW Player, USA)
• Maria Martini (Kingston University, UK)
• Rufael Mekuria (Unified Streaming, The Netherlands)
• Marta Mrak (BBC, UK)
• Matteo Naccari (Audinate, UK)
• Mark Nakano (WarnerMedia, USA)
• Sejin Oh (Dolby, USA)
• Mickael Raulet (ATEME, France)
• Christian Rothenberg (University of Campinas, Brazil)
• Lucile Sassatelli (Universite Cote d'Azur, France)
• Tamar Shoham (Beamr, Israel)
• Gwendal Simon (Synamedia, France)
• Lea Skorin-Kapov (University of Zagreb, Croatia)
• Michael Stattmann (castLabs, Germany)
• Nicolas Weil (Amazon Web Services, USA)
• Roger Zimmermann (NUS, Singapore)

ACM MHV Steering Committee Members
• Balu Adsumilli (YouTube, USA)
• Ali C. Begen (Ozyegin University, Turkey), Co-chair
• Alex Giladi (Comcast, USA), Co-chair
• Sally Hattori (Walt Disney Studios, USA)
• Jean-Baptiste Kempf (VideoLAN, France)
• Thomas Kernen (NVIDIA, Switzerland)
• Scott Labrozzi (Disney Streaming Services, USA)
• Maria Martini (Kingston University, UK)
• Hatice Memiguven (beIN Media, Turkey)
• Ben Mesander (Wowza Media Systems, USA)
• Mark Nakano (WarnerMedia, USA)
• Madeleine Noland (ATSC, USA)
• Yuriy Reznik (Brightcove, USA)
• Tamar Shoham (Beamr, Israel)

Friday, September 24, 2021

INTENSE: In-depth Studies on Stall Events and Quality Switches and Their Impact on the Quality of Experience in HTTP Adaptive Streaming


[PDF]

Babak Taraghi, Minh Nguyen, Hadi Amirpour, Christian Timmerer
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: With the recent growth of multimedia traffic over the Internet and emerging multimedia streaming service providers, improving Quality of Experience (QoE) for HTTP Adaptive Streaming (HAS) becomes more important. Alongside other factors, such as the media quality, HAS relies on the performance of the media player’s Adaptive Bitrate (ABR) algorithm to optimize QoE in multimedia streaming sessions. QoE in HAS suffers from weak or unstable internet connections and suboptimal ABR decisions. As a result of imperfect adaptation to the characteristics and conditions of the internet connection, stall events and quality level switches of varying durations can occur, negatively affecting the QoE. In this paper, we address various identified open issues related to the QoE for HAS, notably (i) the minimum noticeable duration for stall events in HAS; (ii) the correlation between the media quality and the impact of stall events on QoE; (iii) the end-user preference regarding multiple shorter stall events versus a single longer stall event; and (iv) the end-user preference of media quality switches over stall events. Therefore, we have studied these open issues from both objective and subjective evaluation perspectives and presented the correlation between the two types of evaluations. The findings documented in this paper can be used as a baseline for improving ABR algorithms and policies in HAS.
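The stall and switch metrics studied in the abstract can be extracted from a playback event log along these lines. The noticeability threshold below is a hypothetical placeholder, not a value reported in the paper, and the log format is an assumption:

```python
# Sketch of extracting stall-event and quality-switch statistics from a
# playback event log. The threshold is an illustrative assumption.

NOTICEABLE_STALL_S = 0.5  # assumed minimum noticeable stall duration

def stall_stats(events):
    """events: list of (kind, value); kind is 'stall' (duration in
    seconds) or 'switch' (new quality level)."""
    stalls = [v for k, v in events if k == "stall"]
    noticeable = [d for d in stalls if d >= NOTICEABLE_STALL_S]
    switches = sum(1 for k, _ in events if k == "switch")
    return {
        "num_stalls": len(noticeable),       # only noticeable stalls count
        "total_stall_s": sum(noticeable),
        "num_switches": switches,
    }

log = [("switch", 3), ("stall", 0.2), ("stall", 1.5), ("switch", 2)]
print(stall_stats(log))  # the 0.2 s stall falls below the threshold
```

Filtering stalls below a noticeability threshold is exactly the kind of policy decision question (i) of the paper informs.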

Keywords: Crowdsourcing; HTTP Adaptive Streaming; Quality of Experience; Quality Switches; Stall Events; Subjective Evaluation; Objective Evaluation.

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Wednesday, September 22, 2021

LwTE: Light-weight Transcoding at the Edge


IEEE ACCESS

[PDF]

Alireza Erfanian*, Hadi Amirpour*, Farzad Tashtarian, Christian Timmerer, Hermann Hellwagner
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

*These authors contributed equally to this work.

Abstract: Due to the growing demand for video streaming services, providers have to deal with increasing resource requirements for increasingly heterogeneous environments. To mitigate this problem, many works have been proposed which aim to (i) improve cloud/edge caching efficiency, (ii) use computation power available in the cloud/edge for on-the-fly transcoding, and (iii) optimize the trade-off among various cost parameters, e.g., storage, computation, and bandwidth. In this paper, we propose LwTE, a novel Light-weight Transcoding approach at the Edge, in the context of HTTP Adaptive Streaming (HAS). During the encoding process of a video segment at the origin side, computationally intensive search processes are performed. The main idea of LwTE is to store the optimal results of these search processes as metadata for each video bitrate and reuse them at the edge servers to reduce the required time and computational resources for on-the-fly transcoding. LwTE enables us to store only the highest bitrate plus the corresponding metadata (of very small size) for unpopular video segments/bitrates, while popular video segments/bitrates are stored as usual. In this way, in addition to the significant reduction in bandwidth and storage consumption, the required time for on-the-fly transcoding of a requested segment is remarkably decreased by utilizing its corresponding metadata, as unnecessary search processes are avoided. We investigate our approach for Video-on-Demand (VoD) streaming services by optimizing storage and computation (transcoding) costs at the edge servers and then compare it to conventional methods (store all bitrates, partial transcoding). The results indicate that our approach reduces the transcoding time by at least 80% and decreases the aforementioned costs by 12% to 70% compared to the state-of-the-art approaches.
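The storage-versus-transcoding trade-off at the heart of LwTE can be sketched with a simple per-segment cost comparison. All cost figures and names below are hypothetical assumptions for illustration, not values or the optimization model from the paper:

```python
# Illustrative cost model for the per-segment policy described above:
# store all bitrates for popular segments, or store only the highest
# bitrate plus transcoding metadata for unpopular ones.

def cheaper_policy(expected_requests, store_all_cost, store_top_cost,
                   transcode_cost_per_request):
    """Return whichever per-segment policy is cheaper over the horizon."""
    # metadata makes each edge transcode cheap, but it is still a
    # per-request cost; full storage is a one-off cost
    transcode_total = store_top_cost + expected_requests * transcode_cost_per_request
    return "store-all" if store_all_cost <= transcode_total else "transcode-at-edge"

# A popular segment amortizes its storage; an unpopular one favors
# keeping only the top bitrate plus metadata:
print(cheaper_policy(1000, store_all_cost=10.0, store_top_cost=4.0,
                     transcode_cost_per_request=0.02))  # -> store-all
print(cheaper_policy(10, store_all_cost=10.0, store_top_cost=4.0,
                     transcode_cost_per_request=0.02))  # -> transcode-at-edge
```

The paper's contribution is precisely that the metadata shrinks `transcode_cost_per_request`, shifting this break-even point so that far more segments can be served by edge transcoding.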

Keywords: Video streaming, transcoding, video on demand, edge computing.

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Monday, September 6, 2021

CTU Depth Decision Algorithms for HEVC: A Survey


[PDF]

Ekrem Çetinkaya*, Hadi Amirpour*, Mohammad Ghanbari, and Christian Timmerer
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

*These authors contributed equally to this work.

Abstract: High Efficiency Video Coding (HEVC) surpasses its predecessors in encoding efficiency by introducing new coding tools at the cost of an increased encoding time complexity. The Coding Tree Unit (CTU) is the main building block used in HEVC. In the HEVC standard, frames are divided into CTUs with a predetermined size of up to 64 × 64 pixels. Each CTU is then divided recursively into a number of equally sized square areas, known as Coding Units (CUs). Although this diversity of frame partitioning increases encoding efficiency, it also causes an increase in the time complexity due to the increased number of ways to find the optimal partitioning. To address this complexity, numerous algorithms have been proposed to eliminate unnecessary searches during the partitioning of CTUs by exploiting the correlation in the video. In this paper, existing CTU depth decision algorithms for HEVC are surveyed. These algorithms are categorized into two groups, namely statistics and machine learning approaches. Statistics approaches are further subdivided into neighboring and inherent approaches. Neighboring approaches exploit the similarity between adjacent CTUs to limit the depth range of the current CTU, while inherent approaches use only the available information within the current CTU. Machine learning approaches try to extract and exploit similarities implicitly. Traditional methods like support vector machines or random forests use manually selected features, while recently proposed deep learning methods extract features during training. Finally, this paper discusses extending these methods to more recent video coding formats such as Versatile Video Coding (VVC) and AOMedia Video 1 (AV1).
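The "neighboring" statistics approach described in the abstract can be sketched in a few lines: bound the depth search range of the current CTU by the depths chosen for already-encoded neighbors. This is a simplified illustration of the general idea, not any specific published algorithm; the margin parameter is an assumption:

```python
# Minimal sketch of a neighboring CTU depth-range limitation: instead of
# testing all quadtree depths, test only depths close to those chosen
# for the already-encoded neighboring CTUs.

MAX_DEPTH = 3  # HEVC: 64x64 CTU recursively split down to 8x8 CUs

def depth_search_range(neighbor_depths, margin=1):
    """Limit tested depths to [min - margin, max + margin] of neighbors."""
    if not neighbor_depths:          # no encoded neighbors: full search
        return range(0, MAX_DEPTH + 1)
    lo = max(0, min(neighbor_depths) - margin)
    hi = min(MAX_DEPTH, max(neighbor_depths) + margin)
    return range(lo, hi + 1)

# Left and above CTUs settled on depths 1 and 2 -> still test 0..3
print(list(depth_search_range([1, 2])))
# Homogeneous neighbors at depth 0 -> test only depths 0..1
print(list(depth_search_range([0, 0])))
```

In homogeneous regions the pruned range skips the deepest (and most expensive) partitioning levels, which is where the surveyed algorithms obtain their encoding-time savings.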

Keywords: HEVC, Coding Tree Unit, Complexity, CTU Partitioning, Statistics, Machine Learning

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.