Thursday, July 19, 2018


The Institute of Information Technology at Alpen-Adria-Universität Klagenfurt, Austria, announces the following job vacancy:

(fixed-term employment for the period of 3 years, 30 hours/week) 

at the Faculty of Technical Sciences, Institute of Information Technology (ITEC). The gross salary per month for this position is € 2,112.40 (pre-tax, 14 times a year), i.e., the default salary as defined by the Austrian Science Fund (FWF). The estimated commencement of duties is the 1st of October, 2018.

Your main duties include conducting scientific research in the context of the FWF research project OVID (“Relevance Detection in Ophthalmic Surgery Videos”), with the goal of publishing in international, high-quality conferences and journals. You will work under the supervision of experienced researchers at ITEC and in cooperation with other doctoral candidates working on the same project.

More precisely, your duties will be:
  • Independent scientific research in the field of medical video content analysis and machine learning, with the ultimate goal of obtaining a PhD degree at Alpen-Adria-Universität Klagenfurt 
  • Collaboration with the “Medical Multimedia” research team at ITEC in terms of research and (optional) teaching
  • Participation in supervision of master students 
  • Assistance with project-related administrative tasks within the department and in university committees 
  • Assistance with project-related public relations activities within the institute and faculty 
Your profile:
  • Master’s or diploma degree in Technical Sciences in the field of Computer Science, completed at a domestic or foreign university (with good final grades) 
  • Knowledge and experience in: multimedia content analysis, software engineering, and preferably machine learning 
  • Excellent programming skills, especially in C++, Java, and Python 
  • Fluency in English, both in written and oral form 
All relevant documents for the application (at least a curriculum vitae and the master’s certificate including final grades) must be submitted via e-mail to Assoc.-Prof. DI Dr. Klaus Schöffmann no later than the 15th of September, 2018.

Monday, July 2, 2018

Internet-QoE'18 Keynote: HTTP Adaptive Streaming - State of the Art and Challenges Ahead


Vienna, Austria, July 2, 2018
co-located with IEEE ICDCS 2018

Abstract: Real-time entertainment services deployed over the open, unmanaged Internet – streaming audio and video – now account for more than 70% of Internet traffic, and this share is expected to reach 80% by 2021. The technology used for such services is commonly referred to as HTTP Adaptive Streaming (HAS) and is widely adopted by platforms such as YouTube, Netflix, and Flimmit, thanks to the standardization of MPEG-DASH and HLS. This talk provides an overview of HAS and the state of the art of selected deployment options, and reviews work in progress as well as challenges ahead. The main challenges are that (i) content complexity increases, (ii) delay or latency are vital application requirements, and (iii) Quality of Experience can no longer be neglected.
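The core of HAS is that the client, not the server, picks a quality level per segment based on observed network conditions. As a rough illustration, here is a minimal throughput-based adaptation sketch in Python; the representation bitrates, safety factor, and smoothing constant are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of throughput-based bitrate adaptation in HAS.
# Bitrates, safety_factor, and alpha are illustrative assumptions.

BITRATES_KBPS = [500, 1200, 2500, 5000]  # available representations

def select_bitrate(throughput_estimate_kbps, safety_factor=0.8):
    """Pick the highest representation sustainable at the estimated
    throughput, leaving headroom via safety_factor."""
    budget = throughput_estimate_kbps * safety_factor
    candidates = [b for b in BITRATES_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATES_KBPS[0]

def update_estimate(previous_estimate_kbps, measured_kbps, alpha=0.3):
    """Exponentially weighted moving average over per-segment
    throughput measurements, to smooth out short-term fluctuations."""
    return alpha * measured_kbps + (1 - alpha) * previous_estimate_kbps
```

Real players (e.g., the DASH reference player) combine such throughput estimates with buffer occupancy and other signals, which is exactly where much of the research reviewed in the talk takes place.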

Friday, June 15, 2018

DASH-IF awarded Excellence in DASH award at ACM MMSys 2018

The DASH Industry Forum Excellence in DASH Award at ACM MMSys 2018 acknowledges papers that substantially address MPEG-DASH as the presentation format and that are selected for presentation at ACM MMSys 2018. Preference is given to practical enhancements and developments that can sustain the future commercial usefulness of DASH. The DASH format used should conform to the DASH-IF Interoperability Points as defined by DASH-IF. The award is a financial prize as follows: first place – €1000; second place – €500; and third place – €250. The winners are chosen by a committee appointed by the DASH Industry Forum, and the results are final.

Christian Timmerer (left) and Viswanathan (Vishy) Swaminathan (right)


This year's award goes to the following papers (two first places, one second, and one third):

1. Kevin Spiteri, Ramesh Sitaraman, Daniel Sparacio. From Theory to Practice: Improving Bitrate Adaptation in the DASH Reference Player

Christian Timmerer (from left to right), Kevin Spiteri, Ramesh Sitaraman, and Viswanathan (Vishy) Swaminathan
1. Abdelhak Bentaleb, Ali C. Begen, Roger Zimmermann, Saad Harous. Want to Play DASH? GTA: A Game Theoretic Approach for Adaptive Streaming over HTTP

Christian Timmerer (from left to right), Roger Zimmermann, Abdelhak Bentaleb, Ali C. Begen, and Viswanathan (Vishy) Swaminathan
2. Savino Dambra, Giuseppe Samela, Lucile Sassatelli, Romaric Pighetti, Ramon Aparicio Pardo, Anne-Marie Pinna-Déry. Film Editing: New Levers to Improve VR Streaming

Christian Timmerer (from left to right), Lucile Sassatelli, and Viswanathan (Vishy) Swaminathan
3. S. Da Silva, J. Bruneau-Queyreix, M. Lacaud, D. Négru, L. Réveillère. MUSLIN: Achieving High, Fairly Shared QoE Through Multi-Source Live Streaming

Christian Timmerer (from left to right), Simon Da Silva, and Viswanathan (Vishy) Swaminathan
We would like to congratulate all winners and hope to see you next year at ACM MMSys 2019.

Tuesday, June 12, 2018

Packet Video 2018: Best Paper Award


The TPC chairs of Packet Video 2018 identified the following candidates based on the reviews:
  • A. Zare, A. Aminlou, M. Hannuksela, 6K Effective Resolution with 4K HEVC Decoding Capability for OMAF-compliant 360° Video Streaming
  • J. Schneider, M. Bläser, M. Wien, Sparse Coding based Frequency Adaptive Loop Filtering for Video Coding
  • S. Arisu, A. Begen, Quickly Starting Media Streams Using QUIC
... and the winner of the Packet Video 2018 Best Paper Award is:

6K Effective Resolution with 4K HEVC Decoding Capability for OMAF-compliant 360° Video Streaming
A. Zare, A. Aminlou, M. Hannuksela

Abstract: The recent Omnidirectional MediA Format (OMAF) standard specifies delivery of 360° video content. OMAF supports only equirectangular (ERP) and cubemap projections and their region-wise packing, with video decoding capability limited to a maximum resolution of 4K (e.g., 4096x2048). Streaming of 4K ERP content allows only a limited viewport resolution, which is lower than the resolution of many current head-mounted displays (HMDs). In order to take full advantage of these HMDs, this work proposes a specific mixed-resolution packing of 6K (6144x3072) ERP content and its realization in tile-based streaming, while complying with the 4K-decoding constraint and the High Efficiency Video Coding (HEVC) standard. Experimental results indicate that, using the Zonal-PSNR test methodology, the proposed layout decreases the streaming bitrate by up to 32% in terms of BD-rate, when compared to mixed-quality viewport-adaptive streaming of 4K ERP as an alternative solution.

T. Schierl (left), A. Zare (middle), R. Zimmermann (right)

Thursday, June 7, 2018

Packet Video 2018: Dynamic Adaptive Point Cloud Streaming


Mohammad Hosseini (University of Illinois at Urbana-Champaign (UIUC)) and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin Inc.)


Abstract: High-quality point clouds have recently gained interest as an emerging form of representing immersive 3D graphics. Unfortunately, these 3D media are bulky and severely bandwidth-intensive, which makes streaming them to resource-limited and mobile devices difficult. This has prompted researchers to propose efficient and adaptive approaches for the streaming of high-quality point clouds.

In this paper, we run a pilot study towards dynamic adaptive point cloud streaming and extend the concept of dynamic adaptive streaming over HTTP (DASH) towards DASH-PC, a dynamic adaptive bandwidth-efficient and view-aware point cloud streaming system. DASH-PC can tackle the huge bandwidth demands of dense point cloud streaming while at the same time semantically linking to human visual acuity to maintain high visual quality when needed. In order to describe the various quality representations, we propose multiple thinning approaches to spatially sub-sample point clouds in the 3D space, and design a DASH Media Presentation Description manifest specific to point cloud streaming. Our initial evaluations show that we can achieve significant bandwidth and performance improvements on dense point cloud streaming with minor negative quality impacts compared to the baseline scenario where no adaptation is applied.
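The "thinning" idea (spatially sub-sampling a point cloud to produce lower-bitrate representations) can be illustrated with a generic voxel-grid sub-sampler; note this is a common technique sketched here for intuition, not necessarily one of the paper's specific thinning approaches.

```python
# Illustrative sketch: spatial sub-sampling ("thinning") of a point
# cloud via a voxel grid, keeping one representative point per occupied
# voxel. Coarser voxels yield fewer points, i.e., a lower-bitrate
# quality representation for adaptive streaming.

def voxel_thin(points, voxel_size):
    """points: iterable of (x, y, z) tuples.
    Returns a thinned list with at most one point per voxel."""
    representatives = {}
    for p in points:
        # Integer voxel coordinates of this point.
        key = tuple(int(c // voxel_size) for c in p)
        representatives.setdefault(key, p)  # keep first point per voxel
    return list(representatives.values())
```

A streaming server could pre-compute several thinned versions at different voxel sizes and advertise each as a separate quality representation in the manifest.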

Tuesday, June 5, 2018

Packet Video 2018: Investigation of YouTube regarding Content Provisioning for HTTP Adaptive Streaming


Armin Trattnig (Bitmovin Inc.), Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin Inc.), and Christopher Mueller (Bitmovin Inc.)


Abstract: About 300 hours of video are uploaded to YouTube every minute. The main technology used to deliver YouTube content to various clients is HTTP adaptive streaming, and the majority of today’s internet traffic comprises streaming audio and video. In this paper, we investigate content provisioning for HTTP adaptive streaming under predefined aspects representing content features and upload characteristics, and apply this methodology to YouTube. Additionally, we compare YouTube’s content upload and processing functions with a commercially available video encoding service. The results reveal insights into YouTube’s content upload and processing functions, and the methodology can be applied to similar services. All experiments conducted within the paper are reproducible thanks to the use of open-source tools, publicly available datasets, and the scripts used to conduct the experiments on virtual machines.

Monday, June 4, 2018

ACM MMSys 2018: Multi-Codec DASH Dataset


Anatoliy Zabrovskiy (Petrozavodsk State University & Alpen-Adria-Universität Klagenfurt), Christian Feldmann (Bitmovin Inc.), and Christian Timmerer (Alpen-Adria-Universität Klagenfurt & Bitmovin Inc.)

Abstract: The number of bandwidth-hungry applications and services is constantly growing, and HTTP adaptive streaming of audio-visual content accounts for the majority of today's internet traffic. Although internet bandwidth is also constantly increasing, audio-visual compression remains indispensable, and we currently face the challenge of dealing with multiple video codecs.

This paper proposes a multi-codec DASH dataset comprising AVC, HEVC, VP9, and AV1 in order to enable interoperability testing and streaming experiments for the efficient usage of these codecs under various conditions. We adopt state-of-the-art encoding and packaging options and also provide basic quality metrics along with the DASH segments. Additionally, we briefly introduce a multi-codec DASH scheme and possible usage scenarios. Finally, we provide a preliminary evaluation of the encoding efficiency in the context of HTTP adaptive streaming services and applications.
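A multi-codec dataset lets a client pick the most efficient codec it can actually decode. The following sketch shows one plausible selection rule; the codec identifiers and preference order are illustrative assumptions, not details from the paper or the dataset.

```python
# Hedged sketch: given a multi-codec DASH dataset (AVC, HEVC, VP9, AV1),
# a client picks the most compression-efficient codec it supports.
# The preference order below is a simplifying assumption; in practice
# efficiency depends on content, encoder, and bitrate range.

CODEC_PREFERENCE = ["av1", "hevc", "vp9", "avc"]  # assumed: best first

def pick_codec(available, supported):
    """Return the preferred codec present both in the dataset's
    adaptation sets (available) and in the client's decoder
    capabilities (supported), or None if there is no overlap."""
    for codec in CODEC_PREFERENCE:
        if codec in available and codec in supported:
            return codec
    return None
```

In an actual DASH player, `available` would be derived from the codec signaling in the MPD's adaptation sets and `supported` from the platform's decoder capability query.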