Monday, July 15, 2024

Successful 5-year Evaluation of Christian Doppler Laboratory ATHENA

The Christian Doppler (CD) Laboratory ATHENA was established in October 2019 to tackle current and future research and deployment challenges of HTTP Adaptive Streaming (HAS) and emerging streaming methods. The goal of CD laboratories is to conduct application-oriented basic research, promote collaboration between universities and companies, and facilitate technology transfer. They are financed through a public-private partnership between the participating companies and the Christian Doppler Research Association, which in turn is funded by the Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology, and Development (Nationalstiftung für Forschung, Technologie und Entwicklung, FTE). ATHENA is supported by Bitmovin as its company partner.

The CD laboratories have a duration of seven years and undergo rigorous scientific review after two and five years. This spring, the CD lab ATHENA completed its 5-year evaluation, and we have just received official notification from the Christian Doppler Research Association (CDG) that we successfully passed the review. Consequently, it is time to briefly outline the main achievements of this second phase (i.e., years 2 to 5) of the CD lab ATHENA.

Before exploring the achievements, it is important to highlight the ongoing relevance of research in video streaming, given its dominance in today’s Internet usage. The January 2024 Sandvine Internet Phenomena report revealed that video streaming accounts for 68% of fixed/wired and 64% of mobile Internet traffic. Specifically, Video on Demand (VoD) represents 54% of fixed/wired and 57% of mobile traffic, while live streaming contributes 14% of fixed/wired and 7% of mobile traffic. The major services in this domain include YouTube and Netflix, each commanding more than 10% of overall Internet traffic, with TikTok, Amazon Prime, and Disney+ also playing significant roles.

ATHENA is structured into four work packages, each with distinct objectives as detailed below:

  1. Content provisioning: Primarily involves video encoding for HAS, quality-aware encoding, learning-based encoding, and multi-codec HAS.
  2. Content delivery: Addresses HAS issues by utilizing edge computing, exchanging information between CDN/SDN and clients, providing network assistance for clients, and evaluating corresponding utilities.
  3. Content consumption: Focuses on bitrate adaptation schemes (see the minimal sketch after this list), playback improvements, context and user awareness, and studies on Quality of Experience (QoE).
  4. End-to-end aspects: Offers a comprehensive view of application and transport layer enhancements, Quality of Experience (QoE) models, low-latency HAS, and learning-based HAS.
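
To give a flavor of what a bitrate adaptation scheme in WP-3 does, the following minimal Python sketch implements a generic throughput-based heuristic: select the highest representation below a safety fraction of the harmonically averaged recent throughput. This is an illustrative textbook-style baseline, not one of ATHENA's published ABR algorithms; the bitrate ladder and throughput samples are made up.

```python
# Generic throughput-based ABR heuristic (illustrative only, not an ATHENA
# algorithm): choose the highest bitrate below a safety fraction of the
# harmonic mean of recently measured segment throughputs.

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

def select_bitrate(ladder_kbps, recent_throughputs_kbps, safety=0.8):
    """ladder_kbps: available representations in kbps, sorted ascending."""
    estimate = harmonic_mean(recent_throughputs_kbps) * safety
    candidates = [b for b in ladder_kbps if b <= estimate]
    return max(candidates) if candidates else ladder_kbps[0]  # lowest as fallback

ladder = [300, 750, 1500, 3000, 6000]    # hypothetical bitrate ladder (kbps)
samples = [2800, 3500, 3100]             # last observed throughputs (kbps)
print(select_bitrate(ladder, samples))   # -> 1500 (0.8 * ~3.1 Mbps ≈ 2.5 Mbps)
```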

During the second phase of ATHENA’s work, we achieved significant results, including publications in respected academic journals and conferences. Specifically, our publications were featured in key multimedia, signal processing, computer networks & wireless communication, and computing systems venues, as categorized by Google Scholar under engineering and computer science. Notable venues include IEEE Communications Surveys & Tutorials (impact factor: 35.6), IEEE Transactions on Image Processing (10.6), IEEE Internet of Things Journal (10.6), IEEE Transactions on Circuits and Systems for Video Technology (8.4), and IEEE Transactions on Multimedia (7.3).

Furthermore, we focused on technology transfer by submitting 16 invention disclosures, resulting in 13 patent applications (including provisionals). Collaborating with our company partner, we obtained 6 granted patents. Additionally, we are pleased to report progress on our spin-off projects as well as the funding secured for two FFG-funded projects, APOLLO and GAIA, and for an EU Horizon Europe-funded innovation action called SPIRIT.

The ATHENA team was also active in organizing scientific events such as workshops, special sessions, and special issues at IEEE ICME, ACM MM, ACM MMSys, ACM CoNEXT, IEEE ICIP, PCS, and IEEE Network. We also contributed to reproducibility in research through open source tools (e.g., Video Complexity Analyzer and LLL-CAdViSE) and datasets (e.g., Video Complexity Dataset and Multi-Codec Ultra High Definition 8K MPEG-DASH Dataset), among others.

We also note our contributions to the applications of AI in video coding and video streaming.

A major outcome of the second phase is the successful defense of the inaugural cohort of PhD students.

Two postdoctoral scholars have also reached a significant milestone on their path toward habilitation.

During the second phase, each work package produced excellent publications in its domain, briefly highlighted in the following. Content provisioning (WP-1) focused mainly on video coding for HAS (43 papers) and immersive media coding for streaming (4 papers). The former can be further subdivided into the following topic areas:

  • Video complexity: spatial and temporal feature extraction (4 papers; a toy sketch follows this list)
  • Compression efficiency improvement of individual representations (1 paper)
  • Encoding parameter prediction for HAS (9 papers)
  • Efficient bitrate ladder construction (4 papers)
  • Fast multi-rate encoding (3 papers)
  • Data security and data hiding (7 papers)
  • Energy-efficient video encoding for HAS (4 papers)
  • Advancing video quality evaluation (7 papers)
  • Datasets (4 papers)
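
As an illustration of the first topic area above (video complexity features), the following toy Python sketch computes a DCT-energy-based spatial complexity score for a frame, i.e., the average AC (texture) energy over non-overlapping luma blocks. It is merely inspired by the DCT-energy features used in tools like the Video Complexity Analyzer and is not that tool's implementation; the block size and normalization are arbitrary choices here.

```python
# Toy DCT-energy spatial complexity feature (inspired by, but NOT, the
# Video Complexity Analyzer): average absolute AC coefficient magnitude
# over non-overlapping luma blocks.

import numpy as np
from scipy.fft import dctn

def spatial_complexity(luma: np.ndarray, block: int = 32) -> float:
    h, w = luma.shape
    energies = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(luma[y:y + block, x:x + block].astype(np.float64),
                          norm="ortho")
            coeffs[0, 0] = 0.0  # drop the DC term, keep AC (texture) energy
            energies.append(np.abs(coeffs).mean())
    return float(np.mean(energies))

# A flat frame scores near zero; random noise scores high.
flat = np.full((128, 128), 128, dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
print(spatial_complexity(flat), spatial_complexity(noisy))
```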

Content delivery (WP-2) dealt with SDN/CDN assistance for HAS, edge computing support for HAS, and network-embedded media streaming support, resulting in 21 papers. Content consumption (WP-3) worked on client-side QoE enhancement mechanisms and QoE- and energy-aware content consumption (11 papers). Finally, end-to-end aspects (WP-4) produced 15 papers in the area of end-to-end QoE improvement in video streaming. In total, we reported 94 published/accepted papers for the ATHENA 5-year evaluation.

In this context, it is also important to highlight the collaboration within ATHENA, which has resulted in joint publications across various work packages (WPs) and with other ITEC members, for example, collaborations with Prof. Schöffmann (FWF-funded project OVID) and within the FFG-funded projects APOLLO/GAIA and the EU-funded project SPIRIT. In addition, we would like to acknowledge our international collaborators, such as Prof. Hongjie He from Southwest Jiaotong University, Prof. Patrick Le Callet from the University of Nantes, Prof. Wassim Hamidouche from the Technology Innovation Institute (UAE), Dr. Sergey Gorinsky from IMDEA, Dr. Abdelhak Bentaleb from Concordia University, Dr. Raimund Schatz from AIT, and Prof. Pablo Cesar from CWI. We are also pleased to report successful technology transfers to Bitmovin, particularly CAdViSE (WP-4) and WISH ABR (WP-3). Regular “Fun with ATHENA” meetups and break-out groups are used for in-depth discussions about innovations and potential technology transfers.

Over the next two years, the ATHENA project will prioritize the development of deep neural network/AI-based image and video coding within the context of HAS. This includes energy- and cost-aware video coding for HAS, immersive video coding such as volumetric video and holography, as well as Quality of Experience (QoE) and energy-aware content consumption for HAS (including energy-efficient, AI-based live video streaming) and generative AI for HAS.

Thanks to all current and former ATHENA team members: Samira Afzal, Hadi Amirpour, Jesús Aguilar Armijo, Emanuele Artioli, Christian Bauer, Alexis Boniface, Ekrem Çetinkaya, Reza Ebrahimi, Alireza Erfanian, Reza Farahani, Mohammad Ghanbari (late), Milad Ghanbari, Mohammad Ghasempour, Selina Zoë Haack, Hermann Hellwagner, Manuel Hoi, Andreas Kogler, Gregor Lammer, Armin Lachini, David Langmeier, Sandro Linder, Daniele Lorenzi, Vignesh V Menon, Minh Nguyen, Engin Orhan, Lingfeng Qu, Jameson Steiner, Nina Stiller, Babak Taraghi, Farzad Tashtarian, Yuan Yuan, and Yiying Wei. Finally, thanks to ITEC support staff Martina Steinbacher, Nina Stiller, Margit Letter, Marion Taschwer, and Rudolf Messner.

We would also like to thank the Christian Doppler Research Association for its continuous support and for organizing the review, as well as the reviewer for the constructive feedback!

Monday, July 1, 2024

HTTP Adaptive Streaming – Quo Vadis? (2024)

Telecom Seminar Series at TII, June 27, 2024, 4:00 PM (Dubai time)

Abstract: Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research. 

Thursday, June 6, 2024

Video Streaming: Then, Now, Future

I'm happy to share the slides from my public/inaugural lecture at the University of Klagenfurt on June 5, 2024.

  • Title: "Video Streaming: Then, Now, Future"
  • June 5, 2024, 17:00, University of Klagenfurt, Hörsaal 2
In my public lecture, I provide insights into the fascinating history of video streaming, from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. I also present provocative contributions of my own that have significantly influenced the industry. I conclude by looking at future challenges and invite the audience to join the discussion (e.g., in the comments below).

Saturday, May 18, 2024

MPEG news: a report from the 146th meeting

This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. This version of the blog post will also be posted at ACM SIGMM Records.


The 146th MPEG meeting was held in Rennes, France, from 22-26 April 2024, and the official press release can be found here. It comprises the following highlights:
  • AI-based Point Cloud Coding*: Call for proposals focusing on AI-driven point cloud encoding for applications such as immersive experiences and autonomous driving.
  • Object Wave Compression*: Call for interest in object wave compression for enhancing computer holography transmission.
  • Open Font Format: Committee Draft of the fifth edition, overcoming previous limitations like the 64K glyph encoding constraint.
  • Scene Description: Ratified second edition, integrating immersive media objects and extending support for various data types.
  • MPEG Immersive Video (MIV): New features in the second edition, enhancing the compression of immersive video content.
  • Video Coding Standards: New editions of AVC, HEVC, and Video CICP, incorporating additional SEI messages and extended multiview profiles.
  • Machine-Optimized Video Compression*: Advancement in optimizing video encoders for machine analysis.
  • MPEG-I Immersive Audio*: Reached Committee Draft stage, supporting high-quality, real-time interactive audio rendering for VR/AR/MR.
  • Video-based Dynamic Mesh Coding (V-DMC)*: Committee Draft status for efficiently storing and transmitting dynamic 3D content.
  • LiDAR Coding*: Enhanced efficiency and responsiveness in LiDAR data processing with the new standard reaching Committee Draft status.
* ... covered in this column.

AI-based Point Cloud Coding

MPEG issued a Call for Proposals (CfP) on AI-based point cloud coding technologies as a result of ongoing explorations regarding use cases, requirements, and the capabilities of AI-driven point cloud encoding, particularly for dynamic point clouds.

With recent significant progress in AI-based point cloud compression technologies, MPEG is keen on studying and adopting AI methodologies. MPEG is specifically looking for learning-based codecs capable of handling a broad spectrum of dynamic point clouds, which are crucial for applications ranging from immersive experiences to autonomous driving and navigation. As the field evolves rapidly, MPEG expects to receive multiple innovative proposals. These may include a unified codec, capable of addressing multiple types of point clouds, or specialized codecs tailored to meet specific requirements, contingent upon demonstrating clear advantages. MPEG has therefore publicly called for submissions of AI-based point cloud codecs, aimed at deepening the understanding of the various options available and their respective impacts. Submissions that meet the requirements outlined in the call will be invited to provide source code for further analysis, potentially laying the groundwork for a new standard in AI-based point cloud coding. MPEG welcomes all relevant contributions and looks forward to evaluating the responses.

Research aspects: In-depth analysis of algorithms, techniques, and methodologies, including a comparative study of various AI-driven point cloud compression techniques to identify the most effective approaches. Other aspects include creating or improving learning-based codecs that can handle dynamic point clouds as well as metrics for evaluating the performance of these codecs in terms of compression efficiency, reconstruction quality, computational complexity, and scalability. Finally, the assessment of how improved point cloud compression can enhance user experiences would be worthwhile to consider here also.
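
To make the reconstruction-quality aspect concrete, below is a minimal sketch of the widely used point-to-point (D1) PSNR metric for point clouds. The symmetric max-of-both-directions form follows common practice, but the bounding-box-diagonal peak and the synthetic data are illustrative assumptions, not the CfP's prescribed test conditions.

```python
# Minimal point-to-point (D1) PSNR sketch for comparing a reconstructed
# point cloud against its reference. The peak signal is taken as the
# bounding-box diagonal of the reference -- one common convention.

import numpy as np
from scipy.spatial import cKDTree

def d1_psnr(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """reference, reconstructed: (N, 3) arrays of XYZ coordinates."""
    # Symmetric nearest-neighbour MSE: ref -> rec and rec -> ref.
    mse_ab = (cKDTree(reconstructed).query(reference)[0] ** 2).mean()
    mse_ba = (cKDTree(reference).query(reconstructed)[0] ** 2).mean()
    mse = max(mse_ab, mse_ba)
    peak = np.linalg.norm(reference.max(0) - reference.min(0))
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((1000, 3))
rec = ref + rng.normal(scale=1e-3, size=ref.shape)  # simulated coding error
print(f"D1 PSNR: {d1_psnr(ref, rec):.1f} dB")
```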

Object Wave Compression

A Call for Interest (CfI) in object wave compression has been issued by MPEG. Computer holography, a 3D display technology, utilizes a digital fringe pattern called a computer-generated hologram (CGH) to reconstruct 3D images from input 3D models. Holographic near-eye displays (HNEDs) reduce the need for extensive pixel counts due to their wearable design, positioning the display near the eye. This positions HNEDs as frontrunners for the early commercialization of computer holography, with significant research underway for product development. Innovative approaches facilitate the transmission of object wave data, crucial for CGH calculations, over networks. Object wave transmission offers several advantages, including independent treatment from playback device optics, lower computational complexity, and compatibility with video coding technology. These advancements open doors for diverse applications, ranging from entertainment experiences to real-time two-way spatial transmissions, revolutionizing fields such as remote surgery and virtual collaboration. As MPEG explores object wave compression for computer holography transmission, a Call for Interest seeks contributions to address market needs in this field.

Research aspects: Apart from compression efficiency, lower computation complexity, and compatibility with video coding technology, there is a range of research aspects, including the design, implementation, and evaluation of coding algorithms within the scope of this CfI. The QoE of computer-generated holograms (CGHs) together with holographic near-eye displays (HNEDs) is yet another dimension to be explored.

Machine-Optimized Video Compression

MPEG started working on a technical report regarding the "Optimization of Encoders and Receiving Systems for Machine Analysis of Coded Video Content". In recent years, the efficacy of machine learning-based algorithms in video content analysis has steadily improved. However, an encoder designed for human consumption does not always produce compressed video conducive to effective machine analysis. This challenge lies not in the compression standard but in optimizing the encoder or receiving system. The forthcoming technical report addresses this gap by showcasing technologies and methods that optimize encoders or receiving systems to enhance machine analysis performance.

Research aspects: Video (and audio) coding for machines has recently been addressed by the MPEG Video and Audio working groups, respectively. The Joint Video Experts Team (JVET) of MPEG and ITU-T SG16 joined this space with a technical report, but the research aspects remain unchanged, i.e., coding efficiency, metrics, and quality aspects for machine analysis of compressed/coded video content.
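
Coding-efficiency comparisons in this space are commonly reported as Bjøntegaard delta-rate (BD-rate) figures, with a task-accuracy measure (e.g., detection mAP) replacing PSNR on the quality axis when machines are the consumer. The sketch below is a minimal cubic-polynomial BD-rate variant with made-up rate/accuracy points; it is not the official reference implementation.

```python
# Minimal Bjontegaard delta-rate (BD-rate) sketch: average bitrate difference
# (in %) between two encoders at equal quality, where "quality" can be any
# monotonic metric (PSNR, or e.g. detection mAP for machine analysis).

import numpy as np
from numpy.polynomial.polynomial import Polynomial, polyfit

def bd_rate(rates_a, quality_a, rates_b, quality_b) -> float:
    la = polyfit(quality_a, np.log10(rates_a), 3)  # cubic fit: log-rate vs. quality
    lb = polyfit(quality_b, np.log10(rates_b), 3)
    lo = max(min(quality_a), min(quality_b))       # overlapping quality interval
    hi = min(max(quality_a), max(quality_b))
    ia, ib = Polynomial(la).integ(), Polynomial(lb).integ()
    avg_diff = ((ib(hi) - ib(lo)) - (ia(hi) - ia(lo))) / (hi - lo)
    return (10**avg_diff - 1) * 100                # % rate change of B vs. A

# Hypothetical operating points: encoder B needs 10% less rate throughout.
print(bd_rate([1000, 2000, 4000, 8000], [0.60, 0.70, 0.78, 0.84],
              [900, 1800, 3600, 7200], [0.60, 0.70, 0.78, 0.84]))  # -> ~ -10.0
```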

MPEG-I Immersive Audio

MPEG Audio Coding enters the "immersive space" with MPEG-I immersive audio and its corresponding reference software. The MPEG-I immersive audio standard sets a new benchmark for compact and lifelike audio representation in virtual and physical spaces, catering to Virtual, Augmented, and Mixed Reality (VR/AR/MR) applications. By enabling high-quality, real-time interactive rendering of audio content with six degrees of freedom (6DoF), users can experience immersion, freely exploring 3D environments while enjoying dynamic audio. Designed in accordance with MPEG's rigorous standards, MPEG-I immersive audio ensures efficient distribution across bandwidth-constrained networks without compromising on quality. Unlike proprietary frameworks, this standard prioritizes interoperability, stability, and versatility, supporting both streaming and downloadable content while seamlessly integrating with MPEG-H 3D audio compression. MPEG-I's comprehensive modeling of real-world acoustic effects, including sound source properties and environmental characteristics, guarantees an authentic auditory experience. Moreover, its efficient rendering algorithms balance computational complexity with accuracy, empowering users to finely tune scene characteristics for desired outcomes.

Research aspects: Evaluating QoE of MPEG-I immersive audio-enabled environments as well as the efficient audio distribution across bandwidth-constrained networks without compromising on audio quality are two important research aspects to be addressed by the research community.

Video-based Dynamic Mesh Coding (V-DMC)

Video-based Dynamic Mesh Coding (V-DMC) represents a significant advancement in 3D content compression, catering to the ever-increasing complexity of dynamic meshes used across various applications, including real-time communications, storage, free-viewpoint video, augmented reality (AR), and virtual reality (VR). The standard addresses the challenges associated with dynamic meshes that exhibit time-varying connectivity and attribute maps, which were not sufficiently supported by previous standards. V-DMC promises to revolutionize how dynamic 3D content is stored and transmitted, allowing more efficient and realistic interactions with 3D content globally.

Research aspects: V-DMC aims to allow "more efficient and realistic interactions with 3D content", which are subject to research, i.e., compression efficiency vs. QoE in constrained networked environments.

Low Latency, Low Complexity LiDAR Coding

Low Latency, Low Complexity LiDAR Coding underscores MPEG's commitment to advancing coding technologies required by modern LiDAR applications across diverse sectors. The new standard addresses critical needs in the processing and compression of LiDAR-acquired point clouds, which are integral to applications ranging from automated driving to smart city management. It provides an optimized solution for scenarios requiring high efficiency in both compression and real-time delivery, responding to the increasingly complex demands of LiDAR data handling. LiDAR technology has become essential for various applications that require detailed environmental scanning, from autonomous vehicles navigating roads to robots mapping indoor spaces. The Low Latency, Low Complexity LiDAR Coding standard will facilitate a new level of efficiency and responsiveness in LiDAR data processing, which is critical for the real-time decision-making capabilities needed in these applications. This standard builds on comprehensive analysis and industry feedback to address specific challenges such as noise reduction, temporal data redundancy, and the need for region-based quality of compression. The standard also emphasizes the importance of low latency coding to support real-time applications, essential for operational safety and efficiency in dynamic environments.

Research aspects: This standard tackles the challenge of balancing high compression efficiency with real-time capabilities, two often conflicting goals. Researchers may want to carefully consider these aspects when contributing to this field.

The 147th MPEG meeting will be held in Sapporo, Japan, from July 15-19, 2024. Click here for more information about MPEG meetings and their developments.

Friday, May 17, 2024

MPEG news: a report from the 145th meeting

This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. This version of the blog post will also be posted at ACM SIGMM Records.


The 145th MPEG meeting was held online from 22-26 January 2024, and the official press release can be found here. It comprises the following highlights:
  • Latest Edition of the High Efficiency Image Format Standard Unveils Cutting-Edge Features for Enhanced Image Decoding and Annotation
  • MPEG Systems finalizes Standards supporting Interoperability Testing
  • MPEG finalizes the Third Edition of MPEG-D Dynamic Range Control
  • MPEG finalizes the Second Edition of MPEG-4 Audio Conformance
  • MPEG Genomic Coding extended to support Transport and File Format for Genomic Annotations
  • MPEG White Paper: Neural Network Coding (NNC) – Efficient Storage and Inference of Neural Networks for Multimedia Applications
This column will focus on the High Efficiency Image Format (HEIF) and interoperability testing. As usual, a brief update on MPEG-DASH et al. will be provided.

High Efficiency Image Format (HEIF)

The High Efficiency Image Format (HEIF) is a widely adopted standard in the imaging industry that continues to grow in popularity. At the 145th MPEG meeting, MPEG Systems (WG 3) ratified its third edition, which introduces exciting new features, such as progressive decoding capabilities that enhance image quality through a sequential, single-decoder instance process. With this enhancement, users can decode bitstreams in successive steps, with each phase delivering perceptible improvements in image quality compared to the preceding step. Additionally, the new edition introduces a sophisticated data structure that describes the spatial configuration of the camera and outlines the unique characteristics responsible for generating the image content. The update also includes innovative tools for annotating specific areas in diverse shapes, adding a layer of creativity and customization to image content manipulation. These annotation features cater to the diverse needs of users across various industries.

Research aspects: Progressive coding has been a part of modern image coding formats for some time now. However, the inclusion of supplementary metadata provides an opportunity to explore new use cases that can benefit both user experience (UX) and quality of experience (QoE) in academic settings.

Interoperability Testing

MPEG standards typically comprise format definitions (or specifications) to enable interoperability among products and services from different vendors. Interestingly, MPEG goes beyond these format specifications and provides reference software and conformance bitstreams, allowing conformance testing.

At the 145th MPEG meeting, MPEG Systems (WG 3) finalized two standards comprising conformance and reference software by promoting them to the Final Draft International Standard (FDIS) stage, the final stage of standards development. The finalized standards, ISO/IEC 23090-24 and ISO/IEC 23090-25, showcase the pinnacle of conformance and reference software for scene description and visual volumetric video-based coding data, respectively.

ISO/IEC 23090-24 focuses on conformance and reference software for scene description, providing a comprehensive reference implementation and bitstream tailored for conformance testing related to ISO/IEC 23090-14, scene description. This standard opens new avenues for advancements in scene depiction technologies, setting a new standard for conformance and software reference in this domain.

Similarly, ISO/IEC 23090-25 targets conformance and reference software for the carriage of visual volumetric video-based coding data. With a dedicated reference implementation and bitstream, this standard is poised to elevate the conformance testing standards for ISO/IEC 23090-10, the carriage of visual volumetric video-based coding data. The introduction of this standard is expected to have a transformative impact on the visualization of volumetric video data.

At the same 145th MPEG meeting, MPEG Audio Coding (WG 6) celebrated the completion of the second edition of ISO/IEC 14496-26, audio conformance, elevating it to the Final Draft International Standard (FDIS) stage. This significant update incorporates seven corrigenda and five amendments into the initial edition, originally published in 2010.

ISO/IEC 14496-26 serves as a pivotal standard, providing a framework for designing tests to ensure the compliance of compressed data and decoders with the requirements outlined in ISO/IEC 14496-3 (MPEG-4 Audio). The second edition reflects an evolution of the original, addressing key updates and enhancements through diligent amendments and corrigenda. This latest edition, now at the FDIS stage, marks a notable stride in MPEG Audio Coding's commitment to refining audio conformance standards and ensuring the seamless integration of compressed data within the MPEG-4 Audio framework.

These standards will be made freely accessible for download on the official ISO website, ensuring widespread availability for industry professionals, researchers, and enthusiasts alike.

Research aspects: Reference software and conformance bitstreams often serve as the basis for further research (and development) activities and, thus, are highly appreciated. For example, the reference software of video coding formats (e.g., HM for HEVC, VTM for VVC) can be used as a baseline when improving coding efficiency or other aspects of the coding format.

MPEG-DASH Updates

The current status of MPEG-DASH is shown in the figure below.
[Figure: MPEG-DASH Status, January 2024.]

The following most notable aspects have been discussed at the 145th MPEG meeting and adopted into ISO/IEC 23009-1, which will eventually become the 6th edition of the MPEG-DASH standard (the first two items are illustrated with brief sketches after the list):
  • It is now possible to pass CMCD parameters sid and cid via the MPD URL.
  • Segment duration patterns can be signaled using SegmentTimeline.
  • Definition of a background mode of operation, which allows a DASH player to receive MPD updates and listen to events without possibly decrypting or rendering any media.
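
To make the first two items more tangible, here are two brief Python sketches. The first attaches CMCD sid/cid key-value pairs to an MPD URL via the single CMCD query parameter defined in CTA-5004 (the URL and IDs are made up); the second expands a SegmentTimeline into (start, duration) pairs following the standard t/d/r semantics (the open-ended r = -1 case is omitted for brevity).

```python
# Sketch 1: attaching CMCD session/content IDs to an MPD request URL.
# CTA-5004 transports CMCD as one 'CMCD' query parameter whose value is a
# comma-separated key=value list with string values in double quotes.

from urllib.parse import quote

def with_cmcd(mpd_url: str, session_id: str, content_id: str) -> str:
    payload = f'cid="{content_id}",sid="{session_id}"'  # keys in alphabetical order
    sep = "&" if "?" in mpd_url else "?"
    return f"{mpd_url}{sep}CMCD={quote(payload)}"

print(with_cmcd("https://example.com/live/manifest.mpd",
                session_id="6e2fb550-c457-11e9", content_id="ch-42"))

# Sketch 2: expanding a DASH SegmentTimeline into (start, duration) pairs.
# Each S element carries t (optional explicit start), d (duration), and
# r (repeat count; r repeats mean r+1 segments).

def expand_timeline(s_elements, timescale):
    segments, t = [], 0
    for s in s_elements:
        t = s.get("t", t)                   # explicit start overrides running time
        for _ in range(s.get("r", 0) + 1):
            segments.append((t / timescale, s["d"] / timescale))
            t += s["d"]
    return segments

# Two 4 s segments followed by three 2 s segments (timescale 1000):
print(expand_timeline([{"t": 0, "d": 4000, "r": 1}, {"d": 2000, "r": 2}], 1000))
# -> [(0.0, 4.0), (4.0, 4.0), (8.0, 2.0), (10.0, 2.0), (12.0, 2.0)]
```
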
Additionally, the technologies under consideration (TuC) document has been updated with means to signal the maximum segment rate, extend copyright license signaling, and improve haptics signaling in DASH. Finally, REAP is progressing towards FDIS but is not there yet; most details will be discussed in the upcoming AhG period.

The 146th MPEG meeting will be held in Rennes, France, from April 22-26, 2024. Click here for more information about MPEG meetings and their developments.


Thursday, May 16, 2024

Assistant Professor (postdoc) with QA option (tenure track) (all genders welcome)

Department of Information Technology 

Scientific Staff  | Full time

Application deadline: 12 June 2024

Reference code: 673/23

The University of Klagenfurt, with approximately 1,500 employees and over 12,000 students, is located in the Alps-Adriatic region and consistently achieves excellent placements in rankings. The motto “per aspera ad astra” underscores our firm commitment to the pursuit of excellence in all activities in research, teaching, and university management. The principles of equality, diversity, health, sustainability, and compatibility of work and family life serve as the foundation for our work at the university.

The University of Klagenfurt is pleased to announce the following open position at the Department of Information Technology at the Faculty of Technical Sciences with an expected starting date of 7 January 2025:

Assistant Professor (postdoc) with QA option (tenure track) (all genders welcome)

Level of employment: 100 % (40 hours/week)

Minimum salary: € 66,532.20 per annum (gross); classification according to collective agreement: B1 lit. b

Limited to: 6 years (with the option of transitioning to a permanent contract)

Application deadline: 12 June 2024

Reference code: 673/23

Area of responsibility

  • Independent research in computer science and communication technologies with the aim of habilitation
  • Independent delivery of courses in English and German using established and innovative methods
  • Participation in the research and teaching projects run by the organisational unit
  • Acquisition and management of third-party funded projects
  • Supervision of students at Bachelor, Master, and doctoral levels
  • Participation in organisational and administrative tasks and in quality assurance measures
  • Contribution to expanding the international scientific and cultural contacts of the organisational unit
  • Participation in public relations activities including third mission

Requirements

  • Doctoral degree in the field of computer science, information and communications engineering, electrical engineering or related fields completed at a domestic or foreign higher education institution
  • Relevant and good publication record in the field of multimedia systems
  • A strong background in one or both of the following fields:
    • (Distributed) multimedia systems, preferably covering video in the context of video coding, communication, streaming, and quality of experience (QoE);
    • Machine learning, preferably in the context of (distributed) multimedia systems and/or computer vision
  • Very good scientific communication and dissemination skills (scientific writing and oral presentations)
  • Excellent programming skills in multimedia systems and/or machine learning
  • Excellent spoken and written English skills

Desired skills

  • Experience in the acquisition and running of third-party funded projects and readiness to play an active role in third-party funded projects and their acquisition
  • Didactic competence and proven successful teaching experience
  • Willingness to actively participate in research, teaching, and administration
  • Scientific curiosity and enthusiasm for imparting knowledge
  • Gender mainstreaming and diversity management skills
  • Leadership and teamwork skills
  • Good spoken and written German skills

Additional information

Our offer:

This tenure track position includes the option of negotiating a qualification agreement in accordance with Section 27 of the collective agreement for university staff for the areas of research, independent teaching, management and administrative tasks, and experience gained externally (QA). The employment contract is concluded for the position as Assistant Professor (postdoc) with QA option and stipulates a starting salary of € 4,752.30 gross per month (14 times a year; previous experience deemed relevant to the job can be recognised in accordance with the collective agreement). Upon entering into the qualification agreement, the position shall be classified as an Assistant Professorship with a minimum gross salary of € 5,595.60 per month. Upon fulfilling the stipulations of the qualification agreement, the post-holder shall be promoted to tenured Associate Professor with a minimum gross salary of € 6,055.70 per month.

The University of Klagenfurt also offers:

  • Personal and professional advanced training courses, management, and career coaching
  • Numerous attractive additional benefits, see also https://jobs.aau.at/en/the-university-as-employer/
  • Diversity- and family-friendly university culture
  • The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature, and sports

The application:

If you are interested in this position, please apply in German or English, providing a convincing application including the following:

  • Letter of application, including – but not limited to – motivation as well as a concise research and teaching statement, respectively
  • Curriculum vitae, including publication and lecture lists, as well as details and an explanation of research and teaching activities (please do not include a photo)

Furthermore:

  • Proof of all completed higher education programmes (certificates, supplements, if applicable)
  • Outline of the content of the doctoral programme (listing academic achievements, intermediate examinations, etc.) as well as the content of the thesis (summary)
  • Other documentary evidence that may be relevant to this announcement (see prerequisites and desired qualifications)
  • Please provide three references (contact details of persons who the university may contact by telephone for information purposes)

To apply, please select the position with the reference code 673/23 in the category “Scientific Staff” using the link “Apply for this position” in the job portal at https://jobs.aau.at/en/.

Candidates must furnish proof that they meet the required qualifications by 12 June 2024 at the latest.

For further information on this specific vacancy, please contact Prof. Christian Timmerer (christian.timmerer@aau.at). General information about the university as an employer can be found at https://jobs.aau.at/en/the-university-as-employer/. At the University of Klagenfurt, recruitment and staff matters are accompanied not only by the authority responsible for the recruitment procedure but also by the Equal Opportunities Working Group and, if necessary, by the Representative for Disabled Persons.

The University of Klagenfurt aims to increase the proportion of women and therefore specifically invites qualified women to apply for the position. Where the qualification is equivalent, women will be given preferential consideration.

As part of its human resources policy, the University of Klagenfurt places particular emphasis on anti-discrimination, equal opportunities, and diversity.

People with disabilities or chronic diseases, who fulfil the requirements, are particularly encouraged to apply.

Travel and accommodation costs incurred during the application process will not be refunded.

Translations into other languages shall serve informational purposes only. Solely the version advertised in the University Bulletin (Mitteilungsblatt) shall be legally binding. 

Wednesday, January 17, 2024

Streaming week in Denver: MOQ interim + Mile-High Video + SVTA Segments

The next Media over QUIC (MOQ) interim meeting will be hosted by Comcast in Denver (Feb. 6-8). It is open to public participation and free of charge. Details are here: https://github.com/moq-wg/wg-materials/blob/main/interim-24-02/arrangements.md

Then, the ACM Mile-High Video conference will take place just a few miles away (including a Latency Party during the Super Bowl) from Feb. 11-14. Details are here: https://www.mile-high.video/technical-program


Finally, SVTA Segments 2024 will take place at the same venue on Feb. 14th. Details are here: https://segments2024.svta.org/


You can benefit from the early (and combo) registration rates for Mile-High Video and Segments.