Showing posts with label HAS. Show all posts

Thursday, September 5, 2019

ACMMM'19: Docker-Based Evaluation Framework for Video Streaming QoE in Broadband Networks

Docker-Based Evaluation Framework for Video Streaming QoE in Broadband Networks
(Demo Paper)


[PDF]

Cise Midoglu (Simula), Anatoliy Zabrovskiy (AAU), Özgü Alay (Simula), Daniel Hölbling-Inzko (Bitmovin), Carsten Griwodz (Univ. of Oslo), Christian Timmerer (AAU/Bitmovin)

Abstract: Video streaming is one of the top traffic contributors on the Internet and a frequent research subject. Streaming traffic is expected to grow 4-fold for video globally and 9-fold for mobile video between 2017 and 2022. In this paper, we present an automated measurement framework for evaluating video streaming QoE in operational broadband networks, using headless streaming with a Docker-based client and a server-side implementation that supports multiple video players and adaptation algorithms. Our framework integrates with the MONROE testbed and Bitmovin Analytics, which enables large-scale measurements in different networks, including mobility scenarios, and real-time monitoring of parameters in the application, transport, network, and physical layers.
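To illustrate the client side described in the abstract, here is a minimal sketch of how a Dockerized headless measurement run might be launched. The image name, environment variables, and player/ABR options are illustrative assumptions, not the framework's actual interface.

```python
# Hypothetical sketch: assemble a `docker run` command for one headless
# streaming measurement run. All names below (image tag, env vars) are
# assumptions for illustration only.

def build_client_command(player, abr, mpd_url, image="streaming-client:latest"):
    """Build the argument list for launching one Dockerized client run."""
    return [
        "docker", "run", "--rm",
        "-e", f"PLAYER={player}",    # e.g. a dash.js or Bitmovin player build
        "-e", f"ABR={abr}",          # adaptation algorithm selected server-side
        "-e", f"MPD_URL={mpd_url}",  # MPEG-DASH manifest under test
        image,
    ]
```

A run could then be started with `subprocess.run(build_client_command(...))`; keeping the command construction separate makes it easy to sweep players and adaptation algorithms across repeated runs.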

Keywords: adaptive streaming, network measurements, OTT video analytics, QoE


Monday, August 12, 2019

ACMMM'19: Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression

Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression


[PDF]

Jeroen van der Hooft, Tim Wauters, Filip De Turck (Ghent University - imec), Christian Timmerer, and Hermann Hellwagner (Alpen-Adria-Universität Klagenfurt)

Abstract: The increasing popularity of head-mounted devices and 360° video cameras allows content providers to offer virtual reality video streaming over the Internet, using a relevant representation of the immersive content combined with traditional streaming techniques. While this approach allows the user to freely move her head, her location is fixed by the camera’s position within the scene. Recently, interest has grown in free movement within immersive scenes, referred to as six degrees of freedom. One way to realize this is by capturing objects through a number of cameras positioned at different angles, and creating a point cloud which consists of the location and RGB color of a significant number of points in the three-dimensional space. Although the concept of point clouds has been around for over two decades, it recently received increased attention from ISO/IEC MPEG, which issued a call for proposals for point cloud compression. As a result, dynamic point cloud objects can now be compressed to bit rates in the order of 3 to 55 Mb/s, allowing feasible delivery over today’s mobile networks. In this paper, we propose PCC-DASH, a standards-compliant means for HTTP adaptive streaming of scenes comprising multiple, dynamic point cloud objects. We present a number of rate adaptation heuristics which use information on the user’s position and focus, the available bandwidth, and the client’s buffer status to decide upon the most appropriate quality representation of each object. Through an extensive evaluation, we discuss the advantages and drawbacks of each solution. We argue that the optimal solution depends on the considered scene and camera path, which opens interesting possibilities for future work.
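The abstract describes heuristics that pick a quality representation per point cloud object from the user's position, the available bandwidth, and the buffer status. The sketch below is one plausible instance of such a heuristic, not the paper's exact algorithms: it scales the bitrate budget by buffer fill, weights objects by inverse distance to the viewer, and picks the highest representation that fits each object's share.

```python
# Illustrative PCC-DASH-style rate adaptation heuristic. The inverse-distance
# weighting and buffer scaling are assumptions for the sketch, not the
# paper's published heuristics.

def select_representations(objects, bitrate_ladders, bandwidth, buffer_level,
                           target_buffer=4.0):
    """Pick one quality level (Mb/s) per point cloud object.

    objects: dict name -> distance from the user's viewpoint (metres)
    bitrate_ladders: dict name -> sorted list of available bitrates (Mb/s)
    bandwidth: estimated throughput (Mb/s)
    buffer_level / target_buffer: buffered media (seconds)
    """
    # Spend less than the full estimate while the buffer is below target.
    budget = bandwidth * min(1.0, buffer_level / target_buffer)

    # Closer objects receive a larger share of the budget.
    weights = {name: 1.0 / max(dist, 0.1) for name, dist in objects.items()}
    total = sum(weights.values())

    selection = {}
    for name, ladder in bitrate_ladders.items():
        share = budget * weights[name] / total
        # Highest representation that fits the share; fall back to the lowest.
        feasible = [rate for rate in ladder if rate <= share]
        selection[name] = max(feasible) if feasible else ladder[0]
    return selection
```

With two objects at 1 m and 4 m sharing a 3–55 Mb/s ladder (the range the abstract cites for V-PCC content) and a 40 Mb/s estimate, the near object lands on a high representation while the far one drops to the lowest tier.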

Keywords: HTTP adaptive streaming, MPEG-DASH, immersive video, point clouds, MPEG V-PCC, rate adaptation


ACM Reference Format:
Jeroen van der Hooft, Tim Wauters, Filip De Turck, Christian Timmerer, and Hermann Hellwagner. 2019. Towards 6DoF HTTP Adaptive Streaming Through Point Cloud Compression. In ACM Multimedia ’19, October 21–25, 2019, Nice, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/1122445