Friday, September 16, 2022

ECAS-ML: Edge Computing Assisted Adaptation Scheme with Machine Learning for HTTP Adaptive Streaming

 28th International Conference on Multimedia Modeling (MMM)

April 05-08, 2022 | Phu Quoc, Vietnam

Conference Website

[PDF] [Poster]

Jesús Aguilar Armijo, Ekrem Çetinkaya, Christian Timmerer, and Hermann Hellwagner
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: As video streaming traffic in mobile networks increases, improving the content delivery process becomes crucial, e.g., by utilizing edge computing support. At an edge node, we can deploy adaptive bitrate (ABR) algorithms with a better understanding of network behavior and access to radio and player metrics. In this work, we present ECAS-ML, Edge Computing Assisted Adaptation Scheme with Machine Learning for HTTP Adaptive Streaming. ECAS-ML focuses on managing the tradeoff among bitrate, segment switches, and stalls to achieve a higher Quality of Experience (QoE). For that purpose, we use machine learning techniques to analyze radio throughput traces and predict the best parameters of our algorithm to achieve better performance. The results show that ECAS-ML outperforms other client-based and edge-based ABR algorithms.
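To make the managed tradeoff concrete, the snippet below sketches a weighted linear QoE objective that rewards bitrate and penalizes quality switches and stalls, a formulation commonly used in ABR research. The weights and session values are purely illustrative and are not the parameters learned by ECAS-ML.

```python
# Illustrative only (not ECAS-ML's model): a weighted linear QoE objective capturing
# the bitrate / quality-switch / stall tradeoff an ABR scheme has to balance.
# w_switch and w_stall are hypothetical tuning parameters.

def qoe(bitrates_kbps, stall_durations_s, w_switch=1.0, w_stall=3000.0):
    """Return a scalar QoE score for one playback session."""
    reward = sum(bitrates_kbps)                                  # higher bitrate is better
    switches = sum(abs(b1 - b0)                                  # penalize quality switches
                   for b0, b1 in zip(bitrates_kbps, bitrates_kbps[1:]))
    stalls = sum(stall_durations_s)                              # penalize stalls (seconds)
    return reward - w_switch * switches - w_stall * stalls

# Example: a stable 3 Mbps session vs. an oscillating session with a short stall.
print(qoe([3000, 3000, 3000, 3000], [0, 0, 0, 0]))      # 12000.0
print(qoe([4000, 1000, 4000, 1000], [0, 0.5, 0, 0]))    # 10000 - 9000 - 1500 = -500.0
```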

Keywords: HTTP Adaptive Streaming, Edge Computing, Content Delivery, Network-assisted Video Streaming, Quality of Experience, Machine Learning.

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Monday, September 12, 2022

HxL3: Optimized Delivery Architecture for HTTP Low-Latency Live Streaming


IEEE Transactions on Multimedia

[PDF]

Farzad Tashtarian (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), Abdelhak Bentaleb (National University of Singapore), Alireza Erfanian (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), Hermann Hellwagner (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), Christian Timmerer (Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt), and Roger Zimmermann (National University of Singapore).

  

Abstract: While most of the HTTP adaptive streaming (HAS) traffic continues to be video-on-demand (VoD), more users have started generating and delivering live streams with high quality through popular online streaming platforms. Typically, the video content is generated by streamers and watched by large audiences that are geographically distributed far away from the streamers’ locations.

The locations of streamers and audiences create a significant challenge in delivering HAS-based live streams with low latency and high quality. Any problem in the delivery paths results in a reduced viewer experience. In this paper, we propose HxL3, a novel architecture for low-latency live streaming. HxL3 is protocol and codec agnostic and can work equally well with existing HAS-based approaches. By holding a minimum number of live media segments at the edge through efficient caching and prefetching policies, improved transmissions, and transcoding capabilities, HxL3 is able to achieve a high viewer experience across the Internet by alleviating rebuffering and substantially reducing initial startup delay and live stream latency. HxL3 can be easily deployed and used. Its performance has been evaluated using real live stream sources and entities that are distributed worldwide. Experimental results show the superiority of the proposed architecture and give good insights into how low-latency live streaming works.
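The abstract attributes much of the gain to holding only a minimum number of live media segments at the edge while prefetching what viewers will request next. The sketch below illustrates that idea with a tiny edge cache that evicts stale segments and speculatively fetches the next live segment; the segment naming scheme and the fetch_from_origin() helper are hypothetical and not part of the HxL3 implementation.

```python
# Minimal sketch (not the HxL3 implementation): an edge cache that keeps only a few
# live segments and prefetches the next expected segment after each request.
from collections import OrderedDict

def fetch_from_origin(name: str) -> bytes:
    return b"..."  # placeholder for an HTTP request to the origin/packager

class EdgeSegmentCache:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity            # hold only a minimum number of live segments
        self.store = OrderedDict()          # insertion order approximates the live timeline

    def _put(self, name: str, data: bytes) -> None:
        self.store[name] = data
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the oldest (stale) live segment

    def get(self, seg_index: int) -> bytes:
        name = f"seg_{seg_index}.m4s"       # hypothetical naming scheme
        if name not in self.store:          # cache miss: fetch on demand
            self._put(name, fetch_from_origin(name))
        nxt = f"seg_{seg_index + 1}.m4s"    # speculative prefetch to hide origin latency
        if nxt not in self.store:
            self._put(nxt, fetch_from_origin(nxt))
        return self.store[name]

cache = EdgeSegmentCache()
cache.get(42)   # serves seg_42 and prefetches seg_43
```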

Index Terms: Live streaming, HAS, DASH, HLS, CMAF, edge computing, low latency, caching, prefetching, transcoding.

Acknowledgements: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.  



Saturday, September 10, 2022

MoViDNN: A Mobile Platform for Evaluating Video Quality Enhancement with Deep Neural Networks

28th International Conference on Multimedia Modeling (MMM)

April 05-08, 2022 | Phu Quoc, Vietnam

Conference Website

[PDF]

Ekrem Çetinkaya, Minh Nguyen, and Christian Timmerer
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract: Deep neural network (DNN) based approaches have been intensively studied to improve video quality thanks to their fast advancement in recent years. These approaches were designed mainly for desktop devices due to their high computational cost. However, with the increasing performance of mobile devices in recent years, it has become possible to execute DNN-based approaches on mobile devices. Although the required computational power is now available, utilizing DNNs to improve video quality on mobile devices is still an active research area. In this paper, we propose an open-source mobile platform, namely MoViDNN, to evaluate DNN-based video quality enhancement methods, such as super-resolution, denoising, and deblocking. Our proposed platform can be used to evaluate DNN-based approaches both objectively and subjectively. For objective evaluation, we report common metrics such as execution time, PSNR, and SSIM. For subjective evaluation, the Mean Opinion Score (MOS) is reported. The proposed platform is available publicly at https://github.com/cd-athena/MoViDNN.
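As a pointer to how the objective side of such an evaluation looks, the snippet below computes PSNR between a reference frame and an enhanced frame; it is a generic illustration of one of the reported metrics, not code taken from MoViDNN.

```python
# Illustrative only: computing PSNR, one of the objective metrics MoViDNN reports.
import numpy as np

def psnr(reference: np.ndarray, enhanced: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for two 8-bit frames of identical shape."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Example with synthetic 8-bit frames standing in for decoded video frames.
ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)
out = np.clip(ref.astype(np.int16) + np.random.randint(-3, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, out):.2f} dB")
```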

Keywords: Super resolution, Deblocking, Deep Neural Networks, Mobile Devices

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

 

Thursday, September 8, 2022

VCA: Video Complexity Analyzer

The 13th ACM Multimedia Systems Conference (ACM MMSys 2022) Open Dataset and Software (ODS) track

June 14–17, 2022 |  Athlone, Ireland

Conference Website
[PDF]

Vignesh V Menon, Christian Feldmann (Bitmovin, Klagenfurt), Hadi Amirpour, Mohammad Ghanbari, and Christian Timmerer
Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität Klagenfurt

Abstract:

[Figure: VCA in content-adaptive encoding applications]

For online analysis of video content complexity in live streaming applications, selecting low-complexity features is critical to ensure low-latency video streaming without disruptions. To this end, for each video (segment), two features are determined: the average texture energy and the average gradient of the texture energy. A DCT-based energy function is introduced to determine the block-wise texture of each frame, and the spatial and temporal features of the video (segment) are derived from this energy function. The Video Complexity Analyzer (VCA) project aims to provide an efficient spatial and temporal complexity analysis of each video (segment), which can be used in various applications to find optimal encoding decisions. VCA leverages x86 Single Instruction Multiple Data (SIMD) optimizations for Intel CPUs as well as multi-threading optimizations to achieve increased performance. VCA is an open-source library published under the GNU GPLv3 license.
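The following is a simplified sketch of the idea behind these features, not the exact energy function implemented in VCA: per-block texture energy is taken as the sum of absolute AC coefficients of a 2-D DCT, averaged over a frame (spatial feature), and its frame-to-frame difference serves as the temporal feature. Block size, normalization, and the random test frames are illustrative choices.

```python
# Simplified illustration of DCT-based texture features (not VCA's exact formulas).
import numpy as np
from scipy.fft import dctn

def block_texture_energy(block: np.ndarray) -> float:
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    coeffs[0, 0] = 0.0                      # drop the DC term; keep the AC (texture) energy
    return float(np.sum(np.abs(coeffs)))

def frame_texture_energy(luma: np.ndarray, block_size: int = 32) -> float:
    h, w = luma.shape
    energies = [
        block_texture_energy(luma[y:y + block_size, x:x + block_size])
        for y in range(0, h - block_size + 1, block_size)
        for x in range(0, w - block_size + 1, block_size)
    ]
    return float(np.mean(energies))

# Spatial feature (average texture energy) and temporal feature (its gradient over time).
frame0 = np.random.randint(0, 256, (1080, 1920)).astype(np.float64)
frame1 = np.random.randint(0, 256, (1080, 1920)).astype(np.float64)
E = frame_texture_energy(frame1)
h = abs(E - frame_texture_energy(frame0))
print(f"average texture energy E = {E:.1f}, temporal gradient h = {h:.1f}")
```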

Github: https://github.com/cd-athena/VCA
Online documentation: https://cd-athena.github.io/VCA/

Acknowledgments: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged. Christian Doppler Laboratory ATHENA: https://athena.itec.aau.at/.

Thursday, August 11, 2022

MPEG news: a report from the 139th meeting

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.


The 139th MPEG meeting was once again held as an online meeting, and the official press release can be found here and comprises the following items:
  • MPEG Issues Call for Evidence for Video Coding for Machines (VCM)
  • MPEG Ratifies the Third Edition of Green Metadata, a Standard for Energy-Efficient Media Consumption
  • MPEG Completes the Third Edition of the Common Media Application Format (CMAF) by adding Support for 8K and High Frame Rate for High Efficiency Video Coding
  • MPEG Scene Descriptions adds Support for Immersive Media Codecs
  • MPEG Starts New Amendment of VSEI containing Technology for Neural Network-based Post Filtering
  • MPEG Starts New Edition of Video Coding-Independent Code Points Standard
  • MPEG White Paper on the Third Edition of the Common Media Application Format
In this report, I’d like to focus on VCM, Green Metadata, CMAF, VSEI, and a brief update about DASH (as usual).

Video Coding for Machines (VCM)

MPEG’s exploration work on Video Coding for Machines (VCM) aims at compressing features for machine-performed tasks such as video object detection and event analysis. As neural networks increase in complexity, architectures such as collaborative intelligence, whereby a network is distributed across an edge device and the cloud, become advantageous. With newer network architectures being deployed amongst a heterogeneous population of edge devices, such architectures bring flexibility to systems implementers. Due to such architectures, there is a need to efficiently compress intermediate feature information for transport over wide area networks (WANs). As feature information differs substantially from conventional image or video data, coding technologies and solutions for machine usage could differ from conventional human-viewing-oriented applications to achieve optimized performance. With the rise of machine learning technologies and machine vision applications, the amount of video and images consumed by machines has rapidly grown. Typical use cases include intelligent transportation, smart city technology, intelligent content management, etc., which incorporate machine vision tasks such as object detection, instance segmentation, and object tracking. Due to the large volume of video data, extracting and compressing features from a video is essential for efficient transmission and storage. Feature compression technology solicited in this Call for Evidence (CfE) can also be helpful in other regards, such as computational offloading and privacy protection.

Over the last three years, MPEG has investigated potential technologies for efficiently compressing feature data for machine vision tasks and established an evaluation mechanism that includes feature anchors, rate-distortion-based metrics, and evaluation pipelines. The VCM evaluation framework depicted below comprises neural network tasks (typically informative) at both ends as well as a VCM encoder and a VCM decoder. The normative part of VCM typically includes the bitstream syntax, which implicitly defines the decoder, whereas other parts are usually left open for industry competition and research.


Further details about the CfE and how interested parties can respond can be found in the official press release here.

Research aspects: the main research area for coding-related standards is certainly compression efficiency (and probably runtime). However, this video coding standard will not target humans as video consumers but machines. Thus, video quality and, in particular, Quality of Experience need to be interpreted differently, which could be another worthwhile research dimension to be studied in the future.

Green Metadata

MPEG Systems has been working on Green Metadata for the last ten years to enable the adaptation of the client’s power consumption according to the complexity of the bitstream. Many modern implementations of video decoders can adjust their operating voltage or clock speed to adjust the power consumption level according to the required computational power. Thus, if the decoder implementation knows the variation in the complexity of the incoming bitstream, it can adjust its power consumption level to the complexity of the bitstream. This allows for lower energy use in general and extended video playback on battery-powered devices.

The third edition enables support for Versatile Video Coding (VVC, ISO/IEC 23090-3, a.k.a. ITU-T H.266) encoded bitstreams and enhances the capability of this standard for real-time communication applications and services. While finalizing the support of VVC, MPEG Systems has also started the development of a new amendment to the Green Metadata standard, adding the support of Essential Video Coding (EVC, ISO/IEC 23094-1) encoded bitstreams.

Research aspects: reducing global greenhouse gas emissions will certainly be a challenge for humanity in the upcoming years. Video dominates the data on today's internet, and it consumes energy all the way from production to consumption. Therefore, there is a strong need for explicit research efforts to make video streaming in all its facets friendly to our environment.

Third Edition of Common Media Application Format (CMAF)

The third edition of CMAF adds two new media profiles for High Efficiency Video Coding (HEVC, ISO/IEC 23008-2, a.k.a. ITU-T H.265), namely for (i) 8K and (ii) High Frame Rate (HFR). Regarding the former, a media profile supporting 8K resolution video encoded with HEVC (Main 10 profile, Main Tier with 10 bits per colour component) has been added to the list of CMAF media profiles for HEVC. The profile will be branded as ‘c8k0’ and will support videos with up to 7680×4320 pixels (8K) and up to 60 frames per second. Regarding the latter, another media profile, branded as ‘c8k1’, has been added to the list of CMAF media profiles; it supports HEVC-encoded video with up to 8K resolution and up to 120 frames per second. Finally, chroma location indication support has been added to the third edition of CMAF.

Research aspects: basically, CMAF serves two purposes: (i) harmonizing DASH and HLS at the segment format level by adopting the ISOBMFF and (ii) enabling low-latency streaming applications by introducing chunks (which are smaller than segments). The third edition supports resolutions up to 8K and HFR, which ultimately raises the question of how low latency can be achieved for 8K/HFR applications and services and under which conditions.
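A back-of-the-envelope calculation illustrates why chunks matter for latency: with whole segments, the packager must wait for an entire segment before it can be delivered, whereas CMAF chunks can be forwarded as soon as they are encoded. The durations and overheads below are hypothetical, purely to show the order of magnitude.

```python
# Illustrative arithmetic (hypothetical numbers, not from the CMAF specification).
segment_duration = 4.0      # seconds per segment
chunk_duration   = 0.5      # seconds per CMAF chunk (8 chunks per segment)

processing_delay = 0.3      # assumed encoding/packaging overhead per unit
network_delay    = 0.2      # assumed one-way delivery delay

latency_segment_based = segment_duration + processing_delay + network_delay
latency_chunk_based   = chunk_duration   + processing_delay + network_delay

print(f"segment-based live latency >= {latency_segment_based:.1f} s")   # 4.5 s
print(f"chunked (CMAF) live latency >= {latency_chunk_based:.1f} s")    # 1.0 s
```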

New Amendment for Versatile Supplemental Enhancement Information (VSEI) containing Technology for Neural Network-based Post Filtering

At the 139th MPEG meeting, the MPEG Joint Video Experts Team with ITU-T SG 16 (WG 5; JVET) issued a Committee Draft Amendment (CDAM) text for the Versatile Supplemental Enhancement Information (VSEI) standard (ISO/IEC 23002-7, a.k.a. ITU-T H.274). Beyond the Supplemental Enhancement Information (SEI) message for shutter interval indication, which is already known from its specification in Advanced Video Coding (AVC, ISO/IEC 14496-10, a.k.a. ITU-T H.264) and High Efficiency Video Coding (HEVC, ISO/IEC 23008-2, a.k.a. ITU-T H.265), and a new indicator for subsampling phase indication, which is relevant for variable-resolution video streaming, this new amendment contains two SEI messages for describing and activating post filters using neural network technology in video bitstreams. Such filters can be used, for example, for reducing coding noise, upsampling, colour improvement, or denoising. The description of the neural network architecture itself is based on MPEG’s neural network coding standard (ISO/IEC 15938-17). Results from an exploration experiment have shown that neural network-based post filters can deliver better performance than conventional filtering methods. Processes for invoking these new post-processing filters have already been tested in a software framework and will be made available in an upcoming version of the Versatile Video Coding (VVC, ISO/IEC 23090-3, a.k.a. ITU-T H.266) reference software (ISO/IEC 23090-16, a.k.a. ITU-T H.266.2).

Research aspects: quality enhancements such as reducing coding noise, upsampling, colour improvement, or denoising have been researched quite substantially, either with or without neural networks. Enabling such quality enhancements via (V)SEI messages provides system-level support for research and development efforts in this area, for example, the integration into video streaming applications and/or conversational services, including performance evaluations.

The latest MPEG-DASH Update

Finally, I’d like to provide a brief update on MPEG-DASH! At the 139th MPEG meeting, MPEG Systems issued a new working draft related to Extended Dependent Random Access Point (EDRAP) streaming and other extensions, which will be further discussed during the Ad-hoc Group (AhG) period (please join the dash email list for further details/announcements). Furthermore, Defects under Investigation (DuI) and Technologies under Consideration (TuC) have been updated. Finally, a new part has been added (ISO/IEC 23009-9), which is called encoder and packager synchronization, for which also a working draft has been produced. Publicly available documents (if any) can be found here.

An updated overview of DASH standards/features can be found in the Figure below.
Research aspects: in the Christian Doppler Laboratory ATHENA we aim to research and develop novel paradigms, approaches, (prototype) tools and evaluation results for the phases (i) multimedia content provisioning (i.e., video coding), (ii) content delivery (i.e., video networking), and (iii) content consumption (i.e., video player incl. ABR and QoE) in the media delivery chain as well as for (iv) end-to-end aspects, with a focus on, but not being limited to, HTTP Adaptive Streaming (HAS). Recent DASH-related publications include "Low Latency Live Streaming Implementation in DASH and HLS" and "Segment Prefetching at the Edge for Adaptive Video Streaming" among others.

The 140th MPEG meeting will be face-to-face in Mainz, Germany, from October 24-28, 2022. Click here for more information about MPEG meetings and their developments.

Monday, August 8, 2022

Doctoral Student Positions in "Adaptive Streaming over HTTP and Emerging Networked Multimedia Services" (2nd cohort)

The Institute of Information Technology at the Alpen-Adria-Universität Klagenfurt invites applications for:

Doctoral Student Positions (100% employment; all genders welcome) 
within the Christian Doppler (CD) Pilot Laboratory ATHENA 
“Adaptive Streaming over HTTP and Emerging Networked Multimedia Services”


at the Faculty of Technical Sciences. The monthly salary for these positions is according to the standard salaries of the Austrian collective agreement, min. € 3.058,60 pre-tax (14x yearly) (Uni-KV: B1, https://www.aau.at/en/uni-kv). The expected start date of employment is April 1st, 2023.

ATHENA (https://athena.itec.aau.at) stands for Adaptive Streaming over HTTP and Emerging Networked Multimedia Services and has been jointly proposed as a CD Laboratory (https://www.cdg.ac.at/) by the Institute of Information Technology (ITEC; https://itec.aau.at) at Alpen-Adria-Universität Klagenfurt (AAU) and Bitmovin GmbH (https://bitmovin.com) to address current and future research and deployment challenges of HTTP Adaptive Streaming (HAS) and emerging streaming methods.

AAU (ITEC) has been working on adaptive video streaming for more than a decade, has a proven record of successful research projects and publications in the field, and has been actively contributing to MPEG standardization for many years, including MPEG-DASH; Bitmovin is a video streaming software company founded by ITEC researchers in 2013 and has developed highly successful, global R&D and sales activities and a world-wide customer base since then.

The aim of ATHENA CD Lab is to research and develop novel paradigms, approaches, (prototype) tools and evaluation results for the areas (1) multimedia content provisioning (i.e., video coding), (2) content delivery (i.e., multimedia networking) and (3) content consumption (i.e., HAS player aspects) in the media delivery chain as well as for (4) end-to-end aspects, with a focus on, but not being limited to, HTTP Adaptive Streaming (HAS). The new approaches and insights enable Bitmovin to build innovative applications and services to account for the steadily increasing and changing multimedia traffic on the Internet. In addition, according to the CD Lab model of “application-oriented basic research,” the goal is to publish the results in international, high-quality professional journals and conference proceedings.

Your profile:
  • Master or diploma degree of Technical Science in the field of Computer Science or Electrical Engineering, completed at a domestic or foreign university (with good final degrees);
  • Interest and experience in one or more of the above-identified areas, namely (1) multimedia content provisioning (i.e., video coding with a special focus on using machine learning), (2) content delivery (i.e., multimedia networking), and (3) content consumption (i.e., HAS player aspects) in the media delivery chain as well as (4) end-to-end aspects (i.e., with a special focus on video analytics and data science), with a focus on, but not being limited to, HTTP Adaptive Streaming (HAS);
  • Excellent English skills, both in written and oral form. 
Desirable qualifications include:
  • Excellent programming skills, especially in Python, C, C++, Java, or/and JavaScript;
  • Relevant international and practical work experience;
  • Social and communicative competencies and ability to work in a team;
  • Experience with university teaching and research activities.
The working language and the research program are in English. There is no need to learn German for this position unless the applicant wants to participate in undergraduate teaching, which is optional.

Our offer:

The employment contract is concluded for the position of Project Assistant, and the monthly salary for these positions is according to the standard salaries of the Austrian collective agreement, min. € 3.058,60 pre-tax (14x yearly) (Uni-KV: B1, https://www.aau.at/en/uni-kv). 

The University of Klagenfurt also offers:
  • Personal and professional advanced training courses, management and career coaching
  • Numerous attractive additional benefits; see also https://jobs.aau.at/en/the-university-as-employer/
  • Diversity- and family-friendly university culture
  • The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature, and sports
The application:

If you are interested in this position, please apply in German or English by providing the following documents:
  • Letter of motivation
  • Curriculum vitae 
  • Copies of degree certificates and confirmations
  • Proof of all completed higher education programs 
  • Concept of a (potential) dissertation project (one-page maximum)
The University of Klagenfurt is aware of its social responsibility even during COVID-19. This is reflected by the high proportion of fully immunized persons among students and employees. For this reason, a continued willingness to be vaccinated in connection with COVID-19 is expected upon entering university employment.

The University of Klagenfurt aims to increase the proportion of women and explicitly invites qualified women to apply for the position. Where the qualification is equivalent, women will receive preferential consideration. 

People with disabilities or chronic diseases, who fulfill the requirements, are particularly encouraged to apply. 

Travel and accommodation costs incurred during the application process will not be refunded.

Submit all relevant documents, including copies of all school certificates and performance records via here (note: we utilize Bitmovin’s recruitment infrastructure for processing applications, but the position will be at the University of Klagenfurt).

Application deadline: October 1st, 2022.

Contact information:
Dr. Christian Timmerer
Institute of Information Technology, Alpen-Adria-Universität Klagenfurt
Universitätsstraße 65 – 67, 9020 Klagenfurt, Austria
Email: christian(dot)timmerer(at)aau(dot)at
URL: http://blog.timmerer.com, http://itec.aau.at/, https://athena.itec.aau.at/

Klagenfurt, situated at the beautiful Lake Wörthersee – one of the largest and warmest alpine lakes in Europe – has nearly 100,000 inhabitants. Being a small city, with a Renaissance-style city center reflecting 800 years of history and with Italian influence, Klagenfurt is a pleasant place to live and work. The university is located only about 1.5 kilometers east of Lake Wörthersee and about 3 kilometers west of the city center.



Wednesday, August 3, 2022

Predoc Scientist: Distributed (Multimedia) Systems

The University of Klagenfurt, with approximately 1,500 employees and over 12,000 students, is located in the Alps-Adriatic region and consistently achieves excellent rankings. The motto “per aspera ad astra” underscores our firm commitment to the pursuit of excellence in all activities in research, teaching, and university management. The principles of equality, diversity, health, sustainability and compatibility of work and family life serve as the foundation for our work at the university. 

The University of Klagenfurt is pleased to announce the following open position at the Department of Information Technology at the Faculty of Technical Sciences, with an expected starting date of 1 October 2022:

Predoc Scientist (all genders welcome)

Level of employment: 75 % (30 hours/week) 

Minimum salary: € 32,116.- per annum (gross); classification according to collective agreement: B1

Limited to: four years

Depending on the funds available the level of employment can be increased to 100 % during the length of employment. 

Application deadline: 24 August 2022

Reference code: 215_1/22

Tasks and responsibilities:

  • Contributions to research and teaching
  • Independent research with the aim of completing a doctoral thesis
  • Participation in organizational and administrative tasks
  • Participation in public relations activities
  • Assistance in the acquisition and running of third-party funded projects 

Prerequisites for the appointment:

  • Completed Diploma or Master’s studies in the field of Computer Science at a domestic or foreign higher education institution
  • Strong background in one or more of the following fields:
    • Distributed Systems
    • Distributed Multimedia Systems
    • Machine Learning (Neural Networks, Deep Learning); preferably in the context of Distributed (Multimedia) Systems
  • Excellent programming skills
  • Excellent knowledge of English (fluent spoken and written English)

Additional desired qualifications:

  • Communication and presentation skills
  • International and practical experience within their field of activity
  • Teaching experience and didactic competence
  • First relevant publications (disregarding Master’s/Diploma thesis)
  • German language skills (spoken and written German)

Our offer:

The employment contract is concluded for the position as Predoc Scientist Assistant and stipulates a starting salary of € 2,294.- gross per month (14 times a year, level of employment 75 %), respectively € 3,058.60 gross per month (14 times a year, level of employment 100 %); previous experience deemed relevant to the job can be recognised in accordance with the collective agreement. 

The University of Klagenfurt also offers:

  • Personal and professional advanced training courses, management and career coaching
  • Numerous attractive additional benefits, see also https://jobs.aau.at/en/the-university-as-employer/
  • Diversity- and family-friendly university culture
  • The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature, and sports

The application:

If you are interested in this position, please apply in German or English by providing the following documents:

  • Letter of motivation
  • Curriculum vitae 
  • Copies of degree certificates and confirmations
  • Proof of all completed higher education programmes 
  • Concept of a (potential) dissertation project (one page maximum)

Candidates must furnish proof that they meet the required qualifications by 24 August 2022 at the latest.

This position serves the professional and scientific education of graduates of a Diploma or Master’s programme to complete a doctorate programme in Computer Science. Applications from persons with a relevant doctorate or PhD cannot be considered.

To apply, please select the position with the reference code 215_1/22 in the category “Scientific Staff” using the link “Apply for this position” in the job portal, specifically at https://jobs.aau.at/en/job/predoc-scientist-all-genders-welcome-3/.

The University of Klagenfurt is aware of its social responsibility even in times of COVID-19. This is reflected by the high proportion of fully immunised persons among students and employees. For this reason, a continued willingness to be vaccinated in connection with COVID-19 is expected upon entering into university employment.

For further information on this specific vacancy, please contact Univ.-Prof. DI Dr Radu Prodan (radu.prodan@aau.at) and/or Assoc.-Prof. DI Dr Christian Timmerer (christian.timmerer@aau.at). General information about the university as an employer can be found at https://jobs.aau.at/en/the-university-as-employer/. At the University of Klagenfurt, recruitment and staff matters are accompanied not only by the authority responsible for the recruitment procedure but also by the Equal Opportunities Working Group and, if necessary, by the Representative for Disabled Persons. 

The University of Klagenfurt aims to increase the proportion of women and explicitly invites qualified women to apply for the position. Where the qualification is equivalent, women will receive preferential consideration. 

People with disabilities or chronic diseases, who fulfil the requirements, are particularly encouraged to apply. 

Travel and accommodation costs incurred during the application process will not be refunded.

Translations into other languages serve informational purposes only. Solely the version advertised in the University Bulletin (Mitteilungsblatt) is legally binding.