Wednesday, July 28, 2021

Efficient Multi-Encoding Algorithms for HTTP Adaptive Bitrate Streaming

Efficient Multi-Encoding Algorithms for HTTP Adaptive Bitrate Streaming
Picture Coding Symposium (PCS)
29 June-2 July 2021, Bristol, UK

Vignesh V Menon, Hadi Amirpour, Christian Timmerer, and Mohammad Ghanbari
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Since video accounts for the majority of today’s internet traffic, the popularity of HTTP Adaptive Streaming (HAS) is increasing steadily. In HAS, each video is encoded at multiple bitrates and spatial resolutions (i.e., representations) to adapt to the heterogeneity of network conditions, device characteristics, and end-user preferences. Most streaming services utilize cloud-based encoding techniques, which enable a fully parallel encoding process to speed up encoding and consequently reduce the overall time complexity. State-of-the-art approaches further improve the encoding process by utilizing encoder analysis information from already encoded representation(s) to reduce the encoding time complexity of the remaining representations. In this paper, we investigate various multi-encoding algorithms (i.e., multi-rate and multi-resolution) and propose novel multi-encoding algorithms for large-scale HTTP Adaptive Streaming deployments. Experimental results demonstrate that the proposed multi-encoding algorithm optimized for the highest compression efficiency reduces the overall encoding time by 39% with a 1.5% bitrate increase compared to stand-alone encodings. Its version optimized for the highest time savings reduces the overall encoding time by 50% with a 2.6% bitrate increase compared to stand-alone encodings.
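
The core idea of reusing encoder analysis information can be illustrated with a deliberately simplified sketch (hypothetical function names, not the paper's actual algorithm): the CTU depth chosen in an already encoded reference representation caps the depth search range when encoding the remaining representations, so fewer rate-distortion candidates need to be evaluated.

```python
# Toy sketch of multi-rate information reuse: the CTU depth decisions of
# an already encoded reference representation limit the depth search of
# the dependent representations (illustrative only).

def depth_search_range(ref_depth, max_depth=3, window=1):
    """CU depths a dependent encode evaluates, given the depth chosen
    for the co-located CTU in the reference representation."""
    lo = max(0, ref_depth - window)
    hi = min(max_depth, ref_depth + window)
    return list(range(lo, hi + 1))

def searched_fraction(ref_depths, max_depth=3, window=1):
    """Fraction of depth candidates evaluated vs. exhaustive search."""
    total = len(ref_depths) * (max_depth + 1)
    searched = sum(len(depth_search_range(d, max_depth, window))
                   for d in ref_depths)
    return searched / total
```

Under this toy model, a window of one depth level around the reference decision already skips a substantial share of the exhaustive search, which is where the encoding time savings come from.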

Keywords: HTTP Adaptive Streaming, HEVC, Multi-rate Encoding, Multi-encoding.

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Wednesday, July 14, 2021

Call for Papers: ViSNext 2021 Workshop at the ACM CoNEXT 2021 Conference

ViSNext’21: 1st ACM CoNEXT Workshop on Design, Deployment, and Evaluation of Network-assisted Video Streaming

In recent years, we have witnessed phenomenal growth in live video traffic over the Internet, accelerated by the rise of novel video streaming technologies, advancements in networking paradigms, and our ability to generate, process, and display videos on heterogeneous devices. Given the existing constraints and limitations of the different components on the video delivery path from the origin server to clients, the network plays an essential role in boosting the Quality of Experience (QoE) perceived by clients. The ViSNext workshop aims to bring together researchers and developers working on all aspects of video streaming, in particular network-assisted concepts backed up by experimental evidence. We warmly invite submissions of original, previously unpublished papers addressing key issues in this area, including but not limited to:

  • Design, analysis, and evaluation of network-assisted multimedia system architectures
  • Optimization of edge, fog, and mobile edge computing for video streaming applications
  • Optimization of caching policies/systems for video streaming applications
  • Network-assisted resource allocation for video streaming
  • Experience and lessons learned by deploying large-scale network-assisted video streaming
  • Internet measurement and modeling for enhancing QoE in video streaming applications
  • Design, analysis, and evaluation of network-assisted Adaptive Bitrate (ABR) streaming
  • Network aspects in video streaming: cloud computing, virtualization techniques, network control, and management, including SDN, NFV, and network programmability
  • Routing and traffic engineering in end-to-end video streaming
  • Topics at the intersection of energy-efficient computing and networking for video streaming
  • Network-assisted techniques for low-latency video streaming
  • Machine learning for improving QoE in video streaming applications
  • Machine learning for traffic engineering and congestion control for video streaming
  • Solutions for improving streaming QoE for high-speed user mobility
  • Analysis, modeling, and experimentation of DASH
  • Big data analytics at the network edge to assess viewer experience of adaptive video
  • Reproducible research in adaptive video streaming: datasets, evaluation methods, benchmarking, standardization efforts, open-source tools
  • Novel use cases and applications in the area of adaptive video streaming
  • Advanced network-based techniques for point clouds, light field, and immersive video
  • Low delay and multipath video communication

ViSNext’21 Co-Chairs

  • Farzad Tashtarian, Alpen-Adria-Universität Klagenfurt, Austria
  • Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria
  • Halima Elbiaze, Université du Québec à Montréal, Canada
  • Tim Wauters, Ghent University, Belgium

Submission Instructions

  • Solicited submissions include both full technical workshop papers and white paper position papers, up to 6 pages (excluding references) in 2-column 10pt ACM format.
  • Papers must include author names and affiliations for single-blind peer reviewing by the program committee. Authors of accepted submissions are expected to present and discuss their work at the workshop. Papers can be registered and submitted via the workshop’s submission site.

Important Dates

  • Paper Submission: Sep 17, 2021
  • Notification of Acceptance: Oct 18, 2021
  • Camera-ready: Oct 25, 2021
  • Workshop Event: Dec 6, 2021

Contact Us

Any questions regarding submission issues should be directed to visnext21@itec.aau.at.

Monday, July 5, 2021

IEEE OJ-SP: Fast Multi-Resolution and Multi-Rate Encoding for HTTP Adaptive Streaming Using Machine Learning

Fast Multi-Resolution and Multi-Rate Encoding for HTTP Adaptive Streaming Using Machine Learning
IEEE Open Journal of Signal Processing

Ekrem Çetinkaya, Hadi Amirpour, Christian Timmerer, and Mohammad Ghanbari
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Video streaming applications have been attracting increasing attention over the years, and HTTP Adaptive Streaming (HAS) has become the de facto solution for video delivery over the Internet. In HAS, each video is encoded at multiple quality levels and resolutions (i.e., representations) to enable adaptation of the streaming session to the viewing and network conditions of the client. This requirement brings encoding challenges with it, e.g., a video source should be encoded efficiently at multiple bitrates and resolutions. Fast multi-rate encoding approaches aim to address this challenge of encoding multiple representations from a single video by re-using information from already encoded representations. In this paper, a convolutional neural network is used to speed up both multi-rate and multi-resolution encoding for HAS. For multi-rate encoding, the lowest bitrate representation is chosen as the reference. For multi-resolution encoding, the highest bitrate representation of the lowest resolution is chosen as the reference. Pixel values from the target resolution and encoding information from the reference representation are used to predict Coding Tree Unit (CTU) split decisions in High Efficiency Video Coding (HEVC) for the dependent representations. Experimental results show that the proposed method for multi-rate encoding reduces the overall encoding time by 15.08% and the parallel encoding time by 41.26%, with a 0.89% bitrate increase compared to the HEVC reference software. Simultaneously, the proposed method for multi-resolution encoding reduces the overall encoding time by 46.27% and the parallel encoding time by 27.71% on average, with a 2.05% bitrate increase.
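
As a rough illustration of the prediction task only, the following toy stand-in (hypothetical names, no trained network) mimics the shape of such a predictor: it combines the target CTU's pixel variance with the depth of the co-located CTU in the reference representation to produce a split flag. The paper replaces this hand-made rule with a trained CNN.

```python
# Toy stand-in for the learned CTU split predictor (illustrative only):
# a split flag is derived from the target CTU's pixel variance and the
# co-located reference depth instead of a trained CNN.

def ctu_variance(pixels):
    """Sample variance of a flat list of luma values (0..255)."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def predict_split(pixels, ref_depth, var_threshold=400.0):
    """Predict the split flag for a top-level CTU.

    A real system learns this mapping from (pixel values, reference
    encoding information) to split decisions with a CNN."""
    if ref_depth > 0:  # the reference representation already split here
        return True
    return ctu_variance(pixels) > var_threshold
```

The design point the abstract makes is that the dependent encoder can skip an expensive recursive rate-distortion search whenever the predictor's decision is trusted.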

Keywords: HTTP Adaptive Streaming, HEVC, Multirate Encoding, Machine Learning

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Thursday, June 10, 2021

IEEE VCIP 2021 Special Sessions

IEEE VCIP 2021 Special Sessions

http://www.vcip2021.org/

Submission of Papers for Regular, Demo, and Special Sessions (extended): June 27, 2021
Paper Acceptance Notification: August 30, 2021

Title: Learning-based Image and Video Coding

Organizers: João Ascenso (Instituto Superior Técnico), Elena Alshina (Huawei)

Description: Image and video coding algorithms create compact representations of an image by exploiting its spatial redundancy and perceptual irrelevance, thus leveraging the characteristics of the human visual system. Recently, data-driven algorithms such as neural networks have attracted a lot of attention and become a popular area of research and development. This interest is driven by several factors, such as recent advances in processing power (cheap and powerful hardware), the availability of large data sets (big data), and several algorithmic and architectural advances (e.g., generative adversarial networks).

Nowadays, neural networks are the state of the art for several computer vision tasks, such as those requiring a high-level understanding of image semantics (e.g., image classification, object segmentation, and saliency detection), but also for low-level image processing tasks such as image denoising, inpainting, and super-resolution. These advances have led to an increased interest in applying deep neural networks to image and video coding, which is now the main focus of the JPEG AI and the JVET NN activities within the JPEG and MPEG standardization committees.

The aim of these novel image and video coding solutions is to design a compact representation model that has been obtained (learned) from a large amount of visual data and can efficiently represent the wide variety of images and videos that are consumed today. Some of the available learning-based image coding solutions already show very promising experimental results in terms of rate-distortion (RD) performance, notably in comparison with conventional standard image codecs (especially HEVC Intra and VVC Intra) which code the image data with hand-crafted transforms, entropy coding, and quantization schemes.
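
The rate-distortion comparison underlying both conventional codecs and learned ones can be sketched as a Lagrangian cost. The snippet below is illustrative only (toy numbers, one common convention J = D + λ·R); it selects the best operating point from candidate (rate, distortion) pairs, which is the kind of trade-off learned codecs optimize end-to-end during training.

```python
# Minimal rate-distortion selection sketch (illustrative only).

def rd_cost(rate, distortion, lam):
    """Lagrangian cost J = D + lam * R (one common convention)."""
    return distortion + lam * rate

def best_operating_point(points, lam):
    """points: list of (rate, distortion) pairs; return the pair
    minimizing the Lagrangian cost for the given trade-off lam."""
    return min(points, key=lambda p: rd_cost(p[0], p[1], lam))
```

A small λ favors low distortion at higher rates; a large λ favors low rates, mirroring how a learned codec trained with a different λ lands on a different point of the RD curve.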

This special session on Learning-based Image and Video Coding gathers technical contributions that demonstrate efficient coding of image and video content using learning-based approaches. The topic has received many contributions in recent years and is considered critical for the future of both image and video coding, especially for solutions adopting end-to-end training as well as for those where learning-based tools replace conventional ones.

Monday, June 7, 2021

IEEE TNSM: OSCAR: On Optimizing Resource Utilization in Live Video Streaming

OSCAR: On Optimizing Resource Utilization in Live Video Streaming

DOI: 10.1109/TNSM.2021.3051950

Alireza Erfanian, Farzad Tashtarian, Anatoliy Zabrovskiy, Christian Timmerer, Hermann Hellwagner
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Live video streaming traffic and related applications have experienced significant growth in recent years. However, this growth has been accompanied by challenging issues, especially in terms of resource utilization. Although IP multicasting can be recognized as an efficient mechanism to cope with these challenges, it suffers from many problems. Applying software-defined networking (SDN) and network function virtualization (NFV) technologies enables researchers to cope with IP multicasting issues in novel ways. In this paper, by leveraging the SDN concept, we introduce OSCAR (Optimizing reSourCe utilizAtion in live video stReaming) as a new cost-aware video streaming approach to provide advanced video coding (AVC)-based live streaming services in the network. We use two types of virtualized network functions (VNFs): the virtual reverse proxy (VRP) and the virtual transcoder function (VTF). At the edge of the network, VRPs are responsible for collecting clients’ requests and sending them to an SDN controller. Then, by executing a mixed-integer linear program (MILP), the SDN controller determines a group of optimal multicast trees for streaming the requested videos from an appropriate origin server to the VRPs. Moreover, to elevate the efficiency of resource allocation and meet the given end-to-end latency threshold, OSCAR delivers only the highest requested quality from the origin server to an optimal group of VTFs over a multicast tree. The selected VTFs then transcode the received video segments and transmit them to the requesting VRPs in a multicast fashion. To mitigate the time complexity of the proposed MILP model, we present a simple and efficient heuristic algorithm that determines a near-optimal solution in polynomial time. Using the MiniNet emulator, we evaluate the performance of OSCAR in various scenarios. The results show that OSCAR surpasses other SVC- and AVC-based multicast and unicast approaches in terms of cost and resource utilization.
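
The benefit of sending only the highest requested quality over the core network can be illustrated with a toy bandwidth model (hypothetical functions and numbers, not OSCAR's MILP formulation): the core carries one stream per multicast tree link instead of one stream per requested quality, and the lower qualities are produced by VTFs near the clients.

```python
# Toy core-network traffic model (illustrative only, not OSCAR's MILP).

def origin_traffic_all(requested_bitrates, tree_links):
    """Core traffic if every requested quality crosses the multicast
    tree from the origin (bitrates in Mbps, traffic in Mbps-links)."""
    return sum(requested_bitrates) * tree_links

def origin_traffic_oscar(requested_bitrates, tree_links):
    """Core traffic if only the highest quality crosses the tree and
    lower qualities are transcoded at edge VTFs."""
    return max(requested_bitrates) * tree_links
```

In this toy model the saving grows with the number of simultaneously requested qualities, at the cost of the transcoding resources consumed at the selected VTFs, which is exactly the trade-off the MILP balances.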

Keywords: Dynamic Adaptive Streaming over HTTP (DASH), Live Video Streaming, Software Defined Networking (SDN), Video Transcoding, Network Function Virtualization (NFV).

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Wednesday, June 2, 2021

ISM’20: Dynamic Segment Repackaging at the Edge for HTTP Adaptive Streaming

Dynamic Segment Repackaging at the Edge for HTTP Adaptive Streaming

IEEE International Symposium on Multimedia (ISM)
2-4 December 2020, Naples, Italy

Jesús Aguilar Armijo, Babak Taraghi, Christian Timmerer, and Hermann Hellwagner
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Adaptive video streaming systems typically support different media delivery formats, e.g., MPEG-DASH and HLS, replicating the same content multiple times into the network. Such a diversified system results in inefficient use of storage, caching, and bandwidth resources. The Common Media Application Format (CMAF) emerged to simplify HTTP Adaptive Streaming (HAS), providing a single encoding and packaging format for segmented media content and offering opportunities for bandwidth savings, more cache hits, and less storage. However, CMAF is not yet supported by most devices. To solve this issue, we present a solution that maintains the main advantages of CMAF while supporting heterogeneous devices using different media delivery formats. For that purpose, we propose to dynamically convert the content from CMAF to the desired media delivery format at an edge node. We study the bandwidth savings of the proposed approach using an analytical model and simulation, resulting in bandwidth savings of up to 20% with different media delivery format distributions. We analyze the runtime impact of the required operations on the segmented content in two scenarios: the classic one, with four different media delivery formats, and the proposed scenario, using CMAF-only delivery through the network. We compare both scenarios under different edge compute power assumptions. Finally, we perform experiments in a real video streaming testbed delivering MPEG-DASH with CMAF content to serve a DASH and an HLS client, performing the media conversion for the latter.
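
The source of the bandwidth savings can be illustrated with a deliberately simplified model (hypothetical functions, not the paper's analytical model, which additionally accounts for caching and request distributions and reports savings of up to 20%): with per-format delivery, every format requested by at least one client must cross the origin-to-edge link, whereas with CMAF a single copy crosses it and is repackaged at the edge.

```python
# Toy origin-to-edge traffic model for one segment (illustrative only).

def traffic_per_format(format_shares, segment_bits):
    """Each delivery format with a nonzero client share is fetched
    once from the origin."""
    return segment_bits * sum(1 for s in format_shares if s > 0)

def traffic_cmaf(format_shares, segment_bits):
    """One CMAF copy serves all formats after edge repackaging."""
    return segment_bits if any(s > 0 for s in format_shares) else 0

def savings(format_shares, segment_bits):
    """Relative origin-to-edge traffic reduction of CMAF delivery."""
    base = traffic_per_format(format_shares, segment_bits)
    return 1 - traffic_cmaf(format_shares, segment_bits) / base
```

This toy model deliberately overstates the gain of a cacheless setup; the paper's model yields the more conservative figure above once cache hits are taken into account.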

Keywords: CMAF, Edge Computing, HTTP Adaptive Streaming (HAS)

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Thursday, May 27, 2021

IEEE Communications Magazine: From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom

From Capturing to Rendering: Volumetric Media Delivery With Six Degrees of Freedom

Teaser: “Help me, Obi-Wan Kenobi. You’re my only hope,” said the hologram of Princess Leia in Star Wars: Episode IV – A New Hope (1977). This was the first time in cinematic history that the concept of holographic-type communication was illustrated. Almost five decades later, technological advancements are quickly moving this type of communication from science fiction to reality.

IEEE Communications Magazine


Jeroen van der Hooft (Ghent University), Maria Torres Vega (Ghent University), Tim Wauters (Ghent University), Christian Timmerer (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), Ali C. Begen (Ozyegin University, Networked Media), Filip De Turck (Ghent University), and Raimund Schatz (AIT Austrian Institute of Technology)

Abstract: Technological improvements are rapidly advancing holographic-type content distribution. Significant research efforts have been made to meet the low-latency and high-bandwidth requirements set forward by interactive applications such as remote surgery and virtual reality. Recent research made six degrees of freedom (6DoF) for immersive media possible, where users may both move their heads and change their position within a scene. In this article, we present the status and challenges of 6DoF applications based on volumetric media, focusing on the key aspects required to deliver such services. Furthermore, we present results from a subjective study to highlight relevant directions for future research.

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.