Friday, March 27, 2026

Sustainability in Video Encoding and Streaming:
Energy-Efficient Techniques and Metrics

Workshop on Media Energy Consumption Measurement and Exposure

Presenter: Christian Timmerer (Alpen-Adria-Universität Klagenfurt)

Abstract: The presentation discusses the increasing environmental impact of video streaming and highlights the urgent need for more sustainable approaches across the entire streaming pipeline. Video traffic dominates internet usage and contributes significantly to global greenhouse gas emissions, while the demand for higher quality content continues to drive up computational complexity and energy consumption in encoding, delivery, and playback.

A central insight is that there is a strong trade-off between video quality and energy consumption, where small reductions in quality can lead to substantial energy savings. By introducing energy as an explicit optimization objective, techniques such as content-aware encoding, energy-aware bitrate ladder construction, and real-time optimization for live streaming can significantly reduce energy usage while maintaining nearly the same perceptual quality.

The work also emphasizes the role of adaptive bitrate algorithms that incorporate energy consumption alongside traditional quality and buffer-based metrics. These approaches demonstrate that it is possible to simultaneously improve user experience and reduce energy consumption, indicating that sustainability and performance can be aligned rather than conflicting goals.
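The kind of energy-aware adaptation described above can be illustrated with a small sketch. This is not the presenter's algorithm; it is a hypothetical scoring rule in which each candidate bitrate from a ladder is scored by predicted quality, rebuffer risk, and an estimated per-segment client energy cost (all field names and weights below are invented for illustration):

```python
# Hypothetical sketch of an energy-aware ABR scoring rule (not from the
# presentation): each candidate bitrate is scored by a weighted sum of
# predicted quality, rebuffer risk, and estimated client energy cost.

def select_bitrate(candidates, buffer_s, throughput_bps,
                   w_quality=1.0, w_rebuffer=4.0, w_energy=0.5):
    """Pick the candidate with the best quality/rebuffer/energy trade-off.

    candidates: list of dicts with illustrative fields
        'bps'     - encoded bitrate in bits per second
        'quality' - predicted perceptual quality (e.g., 0-100)
        'joules'  - estimated decode+display energy per segment
    """
    best, best_score = None, float("-inf")
    for c in candidates:
        # Seconds by which downloading a 4-second segment at the current
        # throughput estimate would outlast the playback buffer.
        download_s = 4.0 * c["bps"] / max(throughput_bps, 1.0)
        rebuffer_risk = max(0.0, download_s - buffer_s)
        score = (w_quality * c["quality"]
                 - w_rebuffer * rebuffer_risk
                 - w_energy * c["joules"])
        if score > best_score:
            best, best_score = c, score
    return best

# Illustrative three-rung ladder: the energy term steers the client away
# from the top rung when its quality gain is small relative to its cost.
ladder = [
    {"bps": 1_000_000, "quality": 60.0, "joules": 8.0},
    {"bps": 3_000_000, "quality": 80.0, "joules": 14.0},
    {"bps": 8_000_000, "quality": 88.0, "joules": 32.0},
]
choice = select_bitrate(ladder, buffer_s=10.0, throughput_bps=5_000_000)
```

With the energy weight enabled, the 3 Mbps rung wins despite the 8 Mbps rung's higher quality; setting `w_energy=0` recovers a conventional quality-driven choice. This mirrors the abstract's point that a small quality concession can buy a large energy saving.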

To enable such optimizations, the presentation introduces a range of metrics and models, including video complexity measures, quality prediction models, and machine learning-based approaches for estimating encoding and decoding energy as well as CO₂ emissions. These tools support more informed, data-driven decisions across the full streaming workflow from encoding to playback.

Another important theme is end-to-end optimization, where energy efficiency depends on the combined behavior of encoding strategies, bitrate selection, and client-side adaptation. Industry efforts confirm the practical relevance of these approaches and highlight the importance of collaboration and real-world validation.

Despite promising results, several challenges remain, including difficulties in measuring and benchmarking energy consumption, the lack of standardized methodologies, and the limited integration of energy considerations into existing workflows. Overall, the presentation argues that energy consumption should become a first-class optimization target in video streaming systems, similar to established quality metrics, to enable truly sustainable media delivery.

Keywords: sustainable streaming, energy-aware encoding, adaptive bitrate streaming, green multimedia, video compression, bitrate ladder optimization, QoE optimization, energy-quality tradeoff, video complexity analysis, CO₂ footprint, energy modeling, machine learning for video, end-to-end optimization, eco-efficient streaming, real-time streaming optimization

Friday, February 20, 2026

MPEG news: a report from the 153rd meeting

This version of the blog post is also available at ACM SIGMM Records



The 153rd MPEG meeting took place online from January 19-23, 2026. The official MPEG press release can be found here. This report highlights key outcomes from the meeting, with a focus on research directions relevant to the ACM SIGMM community:
  • MPEG Roadmap
  • Exploration on MPEG Gaussian Splat Coding (GSC)
  • MPEG Immersive Video 2nd edition (new white paper)

MPEG Roadmap

MPEG released an updated roadmap showing continued convergence of immersive and “beyond video” media with deployment-ready systems work. Near-term priorities include 6DoF experiences (MPEG Immersive Video v2 and 6DoF audio), volumetric representations (dynamic meshes, solid point clouds, LiDAR, and emerging Gaussian splat coding), and “coding for machines,” which treats visual and audio signals as inputs to downstream analytics rather than only for human consumption.

Research aspects: The most promising research opportunities sit at the intersections: renderer and device-aware rate-distortion-complexity optimization for volumetric content; adaptive streaming and packaging evolution (e.g., MPEG-DASH / CMAF) for interactive 6DoF services under tight latency constraints; and cross-cutting themes such as media authenticity and provenance, green and energy metadata, and exploration threads on neural-network-based compression and compression of neural networks that foreshadow AI-native multimedia pipelines.

MPEG Gaussian Splat Coding (GSC)

Gaussian Splat Coding (GSC) is MPEG’s effort to standardize how 3D Gaussian Splatting content is encoded, decoded, and evaluated so it can be exchanged and rendered consistently across platforms. Such content represents scenes as sparse “Gaussian splats” combining geometry with rich attributes (scale and rotation, opacity, and spherical-harmonics appearance for view-dependent rendering). The main motivation is interoperability for immersive media pipelines: enabling reproducible results, shared benchmarks, and comparable rate-distortion-complexity trade-offs for use cases spanning telepresence and immersive replay to mobile XR and digital twins, while retaining the visual strengths that made 3DGS attractive compared to heavier neural scene representations.

The work remains in an exploration phase, coordinated across ISO/IEC JTC 1/SC 29 groups WG 4 (MPEG Video Coding) and WG 7 (MPEG Coding for 3D Graphics and Haptics) through Joint Exploration Experiments covering datasets and anchors, new coding tools, software (renderer and metrics), and Common Test Conditions (CTC). A notable systems thread is “lightweight GSC” for resource-constrained devices (single-frame, low-latency tracks using geometry-based and video-based pipelines with explicit time and memory targets), alongside an “early deployment” path via amendments to existing MPEG point-cloud codecs to more natively carry Gaussian-splat parameters. In parallel, MPEG is testing whether splat-specific tools can outperform straightforward mappings in quality, bitrate, and compute for real-time and streaming-centric scenarios.

Research aspects: Relevant SIGMM directions include splat-aware compression tools and rate-distortion-complexity optimization (including tracked vs. non-tracked temporal prediction); QoE evaluation for 6DoF navigation (metrics for view and temporal consistency and splat-specific artifacts); decoder and renderer co-design for real-time and mobile lightweight profiles (progressive and LOD-friendly layouts, GPU-friendly decode); and networked delivery problems such as adaptive streaming, ROI and view-dependent transmission, and loss resilience for splat parameters. Additional opportunities include interoperability work on reproducible benchmarking, conformance testing, and practical packaging and signaling for deployment.

MPEG Immersive Video 2nd edition (white paper)

The second edition of MPEG Immersive Video defines an interoperable bitstream and decoding process for efficient 6DoF immersive scene playback, supporting translational and rotational movement with motion parallax to reduce discomfort often associated with pure 3DoF viewing. The second edition primarily extends functionality (without changing the high-level bitstream structure), adding capabilities such as capture-device information, additional projection types, and support for Simple Multi-Plane Image (MPI), alongside tools that better support geometry and attribute handling and depth-related processing.

Architecturally, MIV ingests multiple (unordered) camera views with geometry (depth and occupancy) and attributes (e.g., texture), then reduces inter-view redundancy by extracting patches and packing them into 2D “atlases” that are compressed using conventional video codecs. MIV-specific metadata signals how to reconstruct views from the atlases. The standard is built as an extension of the common Visual Volumetric Video-based Coding (V3C) bitstream framework shared with V-PCC, with profiles that preserve backward compatibility while introducing a new profile for added second-edition functionality and a tailored profile for full-plane MPI delivery.

Research aspects: Key SIGMM topics include systems-efficient 6DoF delivery (better view and patch selection and atlas packing under latency and bandwidth constraints); rate-distortion-complexity-QoE optimization that accounts for decode and render cost (especially on HMD and mobile) and motion-parallax comfort; adaptive delivery strategies (representation ladders, viewport and pose-driven bit allocation, robust packetization and error resilience for atlas video plus metadata); renderer-aware metrics and subjective protocols for multi-view temporal consistency; and deployment-oriented work such as profile and level tuning, codec-group choices (HEVC / VVC), conformance testing, and exploiting second-edition features (capture device info, depth tools, Simple MPI) for more reliable reconstruction and improved user experience.

Concluding Remarks

The meeting outcomes highlight a clear shift toward immersive and AI-enabled media systems where compression, rendering, delivery, and evaluation must be co-designed. These developments offer timely opportunities for the ACM SIGMM community to contribute reproducible benchmarks, perceptual metrics, and end-to-end streaming and systems research that can directly influence emerging standards and deployments.

The 154th MPEG meeting will be held in Santa Eulària, Spain, from April 27 to May 1, 2026. Click here for more information about MPEG meetings and ongoing developments.

Wednesday, February 18, 2026

Professor of Information Systems Engineering (all genders welcome)

Department of Informatics Systems

Full professorships | Full time

Application deadline: 22 March 2026

Reference code: 43/02-PERS/26

URL: https://jobs.aau.at/en/job/professor-of-information-systems-engineering-all-genders-welcome/

Announcement

The University of Klagenfurt wants to attract more women for professorships.

We are pleased to announce the following open position at the Department of Informatics Systems, Faculty of Technical Sciences, in compliance with the provisions of § 98 (permanent) or § 98 (fixed-term, max. 6 years) of the Austrian Universities Act:

Professor of Information Systems Engineering (all genders welcome)

This is a full-time position available from 1 October 2027. Depending on the candidate’s academic credentials, the employment contract can be concluded either as a permanent employment contract or as a fixed-term employment contract with the option of a permanent extension. The duration of fixed-term contracts is subject to negotiation.

With approximately 13,000 students, the University of Klagenfurt is a young, vibrant and innovative university, located at the intersection of Alpine and Mediterranean culture in an area that offers exceptionally high quality of life. As a public university pursuant to § 6 of the Austrian Universities Act, it receives federal funding. The university operates under the motto “Beyond Boundaries!”.

In accordance with its key strategic road map, the Development Plan, the university’s primary guiding principles and objectives include the pursuit of scientific excellence regarding the appointment of professors, favourable research conditions, a good faculty-student ratio, and the promotion of the development of early career researchers.

Information Systems Engineering focuses on the design, development, and management of large systems that connect people, data, and technology to support organizational goals. It combines principles of software engineering, data management, business processes, and emerging digital technologies to create solutions that enhance decision-making, optimize operations, and drive innovation.

We welcome applications addressing the engineering of Information Systems, in particular those focusing on designing, modelling, executing, verifying, and optimizing business processes. We are looking for a highly qualified and internationally visible scientist with high engagement in developing and sustaining an ambitious and innovative research and teaching programme. Candidates should also be interested in developing collaborations in the university’s Areas of Research Strength: Digitalisation and Health, Multiple Perspectives in Optimization, Networked and Autonomous Systems and/or the Cluster of Excellence “Bilateral AI”.

Your responsibilities – what awaits you

The duties of the position include:

  • Representing the field of Information Systems Engineering in research and teaching
  • Teaching in relevant degree programmes at Bachelor’s, Master’s, and Doctoral level both in English and German, as well as supervision of student projects and academic theses
  • Advising and mentoring of students and early career researchers
  • Competitive research grant acquisition and management
  • Collaboration with academic and industry partners
  • Participation in university management
  • Participation in third mission and public relations activities

Your profile

  • Habilitation or equivalent qualification in Computer Science or a relevant neighbouring field
  • Excellent research track record in Information Systems Engineering
  • Experience in the acquisition and management of competitive third-party funded research projects of a relevant volume
  • Teaching competence and experience at university level
  • Experience in the (co-)supervision of academic theses
  • Fluency in English

Additional desirable qualifications
  • Interdisciplinary experience
  • Scientific dissemination skills
  • Engagement in academic administrative duties
  • Competence in leadership and teamwork
  • Competence in gender mainstreaming and diversity management
German language skills are not a formal prerequisite, but proficiency at level B2 is expected within two years.

Why you will enjoy working with us

The salary is subject to negotiation. The minimum gross salary for the position at this level (salary group A1 for University Staff according to the Austrian Universities’ Collective Bargaining Agreement) is currently € 93,986 per year.

The university is committed to increasing the number of women among the faculty, particularly in high-level positions, and therefore specifically invites applications from qualified women. Among equally qualified candidates, women will receive preferential consideration.

People with disabilities or chronic diseases who meet the qualification criteria are explicitly invited to apply.

In accordance with the Austrian Income Tax Act, an attractive relocation tax allowance can be granted for the first five years in the case of appointments to professorships in Austria. The prerequisites are subject to examination on a case-by-case basis.

Please submit your application in English by e-mail to the University of Klagenfurt, Office of the Senate, attn. Mag.a (FH) Sabine Seebacher via application_professorship@aau.at no later than 22 March 2026, including:

  • a mandatory principal part not exceeding five pages (https://jobs.aau.at/wp-content/uploads/specimen_main_part_application_professorship.docx). The submission of the mandatory principal part constitutes a necessary condition for the validity of your application.
  • one single PDF including:
    • a letter of motivation
    • a detailed scientific CV
    • a comprehensive list of publications, talks, and all courses taught
    • a list of projects that you acquired as a PI or co-PI, including the amount of funding that was attributed to you
    • a research statement
    • a teaching statement
    • supplementary documents where applicable (e.g., course evaluations)
    • links to publicly available versions of your three most important publications within the scope of this professorship

For general information, please refer to https://jobs.aau.at/en/the-university-as-employer/. For specific information about the position, please contact Prof. Dr. Martin Pinzger (Tel.: +43 463 2700 3513; martin.pinzger@aau.at).

Friday, November 28, 2025

MPEG news: a report from the 152nd meeting

This version of the blog post is also available at ACM SIGMM Records


The 152nd MPEG meeting took place in Geneva, Switzerland, from October 7 to October 11, 2025. The official MPEG press release can be found here. This column highlights key points from the meeting, amended with research aspects relevant to the ACM SIGMM community:

  • MPEG Systems received an Emmy® Award for the Common Media Application Format (CMAF). A separate press release regarding this achievement is available here.
  • JVET ratified new editions of VSEI, VVC, and HEVC
  • The fourth edition of Visual Volumetric Video-based Coding (V3C and V-PCC) has been finalized
  • Responses to the call for evidence on video compression with capability beyond VVC successfully evaluated

MPEG Systems received an Emmy® Award for the Common Media Application Format (CMAF)

On September 18, 2025, the National Academy of Television Arts & Sciences (NATAS) announced that the MPEG Systems Working Group (ISO/IEC JTC 1/SC 29/WG 3) had been selected as a recipient of a Technology & Engineering Emmy® Award for standardizing the Common Media Application Format (CMAF). But what is CMAF? CMAF (ISO/IEC 23000-19) is a media format standard designed to simplify and unify video streaming workflows across different delivery protocols and devices. Before CMAF, streaming services often had to produce multiple container formats: ISO Base Media File Format (ISOBMFF) segments for MPEG-DASH and MPEG-2 Transport Stream (TS) segments for Apple HLS. This duplication resulted in additional encoding, packaging, and storage costs. I wrote a blog post about this some time ago here. CMAF’s main goal is to define a single, standardized segmented media format usable by both HLS and DASH, enabling “encode once, package once, deliver everywhere.”

At its core, CMAF is based on ISOBMFF, the foundation of the MP4 file format. Each CMAF stream consists of a CMAF header, CMAF media segments, and CMAF track files (a logical sequence of segments for one stream, e.g., video or audio). CMAF enables low-latency streaming by allowing progressive segment transfer, adopting chunked transfer encoding via CMAF chunks. CMAF also defines interoperable profiles for codecs and presentation types for video, audio, and subtitles. Thanks to its compatibility with and adoption within existing streaming standards, CMAF bridges the gap between DASH and HLS, creating a unified ecosystem.
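Because CMAF segments are plain ISOBMFF, their top-level structure can be inspected with a few lines of code. The sketch below (assuming in-memory segment bytes; the synthesized test data contains deliberately empty boxes) walks the standard box headers, where every box starts with a 4-byte big-endian size followed by a 4-byte type:

```python
import struct

def walk_boxes(data, offset=0, end=None):
    """Yield (box_type, size) for top-level ISOBMFF boxes in `data`.

    Every ISOBMFF box starts with a 4-byte big-endian size and a
    4-byte type; size == 1 means a 64-bit largesize follows, and
    size == 0 means the box extends to the end of the data.
    """
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        if size == 1:  # 64-bit largesize follows the type field
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:  # box runs to the end of the data
            size = end - offset
        yield box_type, size
        offset += size

# A CMAF chunk is typically a small 'moof' + 'mdat' pair; here we
# synthesize two empty boxes just to exercise the parser.
chunk = struct.pack(">I4s", 8, b"moof") + struct.pack(">I4s", 8, b"mdat")
print(list(walk_boxes(chunk)))  # [('moof', 8), ('mdat', 8)]
```

Running such a walker over a low-latency stream makes the chunking visible: each CMAF chunk surfaces as its own `moof`/`mdat` pair, which is what allows progressive transfer before the full segment exists.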

Research aspects include – but are not limited to – low-latency tuning (segment/chunk size trade-offs, HTTP/3, QUIC), Quality of Experience (QoE) impact of chunk-based adaptation, synchronization of live and interactive CMAF streams, edge-assisted CMAF caching and prediction, and interoperability testing and compliance tools.

JVET ratified new editions of VSEI, VVC, and HEVC

At its 40th meeting, the Joint Video Experts Team (JVET, ISO/IEC JTC 1/SC 29/WG 5) concluded the standardization work on the next editions of three key video coding standards, advancing them to the Final Draft International Standard (FDIS) stage. Corresponding twin-text versions have also been submitted to ITU-T for consent procedures. The finalized standards include:

  • Versatile Supplemental Enhancement Information (VSEI) — ISO/IEC 23002-7 | ITU-T Rec. H.274
  • Versatile Video Coding (VVC) — ISO/IEC 23090-3 | ITU-T Rec. H.266
  • High Efficiency Video Coding (HEVC) — ISO/IEC 23008-2 | ITU-T Rec. H.265

The primary focus of these new editions is the extension and refinement of Supplemental Enhancement Information (SEI) messages, which provide metadata and auxiliary data to support advanced processing, interpretation, and quality management of coded video streams.

The updated VSEI specification introduces both new and refined SEI message types supporting advanced use cases:

  • AI-driven processing: Extensions for neural-network-based post-filtering and film grain synthesis offer standardized signalling for machine learning components in decoding and rendering pipelines.
  • Semantic and multimodal content: New SEI messages describe infrared, X-ray, and other modality indicators, region packing, and object mask encoding, creating interoperability points for multimodal fusion and object-aware compression research.
  • Pipeline optimization: Messages defining processing order and post-processing nesting support research on joint encoder-decoder optimization and edge-cloud coordination in streaming architectures.
  • Authenticity and generative media: A new set of messages supports digital signature embedding and generative-AI-based face encoding, raising questions for the SIGMM community about trust, authenticity, and ethical AI in media pipelines.
  • Metadata and interpretability: New SEIs for text description, image format metadata, and AI usage restriction requests could facilitate research into explainable media, human-AI interaction, and regulatory compliance in multimedia systems.

All VSEI features are fully compatible with the new VVC edition, and most are also supported in HEVC. The new HEVC edition further refines its multi-view profiles, enabling more robust 3D and immersive video use cases.

Research aspects of these new editions can be summarized as follows: (i) Define new standardized interfaces between neural post-processing and conventional video coding, fostering reproducible and interoperable research on learned enhancement models. (ii) Encourage exploration of metadata-driven adaptation and QoE optimization using SEI-based signals in streaming systems. (iii) Open possibilities for cross-layer system research, connecting compression, transport, and AI-based decision layers. (iv) Introduce a formal foundation for authenticity verification, content provenance, and AI-generated media signalling, relevant to current debates on trustworthy multimedia.

These updates highlight how ongoing MPEG/ITU standardization is evolving toward a more AI-aware, multimodal, and semantically rich media ecosystem, providing fertile ground for experimental and applied research in multimedia systems, coding, and intelligent media delivery.

The fourth edition of Visual Volumetric Video-based Coding (V3C and V-PCC) has been finalized

MPEG Coding of 3D Graphics and Haptics (ISO/IEC JTC 1/SC 29/WG 7) has advanced MPEG-I Part 5 – Visual Volumetric Video-based Coding (V3C and V-PCC) to the Final Draft International Standard (FDIS) stage, marking its fourth edition. This revision introduces major updates to the Visual Volumetric Video-based Coding (V3C) framework, particularly enabling support for an additional bitstream instance: V-DMC (Video-based Dynamic Mesh Compression).

Previously, V3C served as the structural foundation for V-PCC (Video-based Point Cloud Compression) and MIV (MPEG Immersive Video). The new edition extends this flexibility by allowing V-DMC integration, reinforcing V3C as a generic, extensible framework for volumetric and 3D video coding. All instances follow a shared principle: conventional 2D video codecs (e.g., HEVC, VVC) are used for projection-based compression, complemented by specialized tools for mapping, geometry, and metadata handling.

While V-PCC remains co-specified within Part 5, MIV (Part 12) and V-DMC (Part 29) are standardized separately. The progression to FDIS confirms the technical maturity and architectural stability of the framework.

This evolution opens new research directions as follows: (i) Unified 3D content representation, enabling comparative evaluation of point cloud, mesh, and view-based methods under one coding architecture. (ii) Efficient use of 2D codecs for 3D media, raising questions on mapping optimization, distortion modeling, and geometry-texture compression. (iii) Dynamic and interactive volumetric streaming, relevant to AR/VR, telepresence, and immersive communication research.

The fourth edition of MPEG-I Part 5 thus positions V3C as a cornerstone for future volumetric, AI-assisted, and immersive video systems, bridging standardization and cutting-edge multimedia research.

Responses to the call for evidence on video compression with capability beyond VVC successfully evaluated

The Joint Video Experts Team (JVET, ISO/IEC JTC 1/SC 29/WG 5) has completed the evaluation of submissions to its Call for Evidence (CfE) on video compression with capability beyond VVC. The CfE investigated coding technologies that may surpass the performance of the current Versatile Video Coding (VVC) standard in compression efficiency, computational complexity, and extended functionality.

A total of five submissions were assessed, complemented by ECM16 reference encodings and VTM anchor sequences with multiple runtime variants. The evaluation addressed both compression capability and encoding runtime, as well as low-latency and error-resilience features. All technologies were derived from VTM, ECM, or NNVC frameworks, featuring modified encoder configurations and coding tools rather than entirely new architectures.

Key Findings

  • In the compression capability test, 76 out of 120 test cases showed at least one submission with a non-overlapping confidence interval compared to the VTM anchor. Several methods outperformed ECM16 in visual quality and achieved notable compression gains at lower complexity. Neural-network-based approaches demonstrated clear perceptual improvements, particularly for 8K HDR content, while gains were smaller for gaming scenarios.
  • In the encoding runtime test, significant improvements were observed even under strict complexity constraints: 37 of 60 test points (at both 1× and 0.2× runtime) showed statistically significant benefits over VTM. Some submissions achieved faster encoding than VTM, with only a 35% increase in decoder runtime.
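The non-overlapping-confidence-interval criterion used in these evaluations can be illustrated with a small sketch. The scores below are invented (not CfE data), and the normal-approximation 95% interval is a simplification of formal subjective-test statistics:

```python
import math

def mean_ci(scores, z=1.96):
    """Normal-approximation 95% confidence interval for the mean.

    Uses the sample standard deviation (n - 1 denominator); a formal
    subjective-test analysis would typically use a t-distribution.
    """
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

def non_overlapping(a, b):
    """True if the two (low, high) intervals do not overlap."""
    return a[1] < b[0] or b[1] < a[0]

# Illustrative per-viewer scores for an anchor and a submission
# at one test point (invented numbers).
anchor     = [5.1, 5.4, 5.0, 5.3, 5.2, 5.1]
submission = [6.2, 6.5, 6.1, 6.4, 6.3, 6.2]
print(non_overlapping(mean_ci(anchor), mean_ci(submission)))  # True
```

Under this criterion, a test point counts as showing a statistically meaningful difference only when the two intervals are disjoint, which is a deliberately conservative reading of the data.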

Research Relevance and Outlook

The CfE results illustrate a maturing convergence between model-based and data-driven video coding, raising research questions highly relevant for the ACM SIGMM community:

  • How can learned prediction and filtering networks be integrated into standard codecs while preserving interoperability and runtime control?
  • What methodologies can best evaluate perceptual quality beyond PSNR, especially for HDR and immersive content?
  • How can complexity-quality trade-offs be optimized for diverse hardware and latency requirements?

Building on these outcomes, JVET is preparing a Call for Proposals (CfP) for the next-generation video coding standard, with a draft planned for early 2026 and evaluation through 2027. Upcoming activities include refining test material, adding Reference Picture Resampling (RPR), and forming a new ad hoc group on hardware implementation complexity.

For multimedia researchers, this CfE marks a pivotal step toward AI-assisted, complexity-adaptive, and perceptually optimized compression systems, which are considered a key frontier where codec standardization meets intelligent multimedia research.


The 153rd MPEG meeting will be held online from January 19 to January 23, 2026. Click here for more information about MPEG meetings and their developments.

Tuesday, October 14, 2025

Happy World Standards Day 2025!

Celebrating innovation, interoperability, and collaboration through international standards.

Every year on October 14, we celebrate World Standards Day — honoring the collective efforts of experts and organizations worldwide who develop and maintain the standards that make modern digital life possible. For the Moving Picture Experts Group (MPEG), this day marks decades of work in defining the technologies that power media, streaming, and immersive experiences worldwide.

A Year of Progress and New Milestones

Over the past year, MPEG and its working groups achieved remarkable progress across video, audio, systems, and AI-driven technologies — advancing the future of multimedia communication. Hot off the press, MPEG is proud to announce another Emmy® Technology & Engineering Award — this time for the Common Media Application Format (CMAF; ISO/IEC 23000-19), a landmark standard that brought long-awaited harmonization between DASH and HLS streaming formats (among others).

Next Generation Video Coding Beyond VVC

The Joint Video Experts Team (JVET), a joint effort of ISO/IEC and ITU-T, launched a Call for Evidence exploring technologies that go beyond Versatile Video Coding (VVC).

The goal: to identify breakthroughs that significantly improve compression efficiency, runtime performance, and functionality — from HDR and 8K video to gaming and user-generated content. Depending on the results, a Call for Proposals (CfP) for the next generation of video coding may follow in 2026, opening the door to AI-enhanced compression.

The current plan foresees a draft CfP in January 2026, followed by the final CfP in July 2026 and submissions in November 2026, with evaluations scheduled for January 2027. The first version of the resulting standard is expected to be finalized within three years thereafter.

MPEG-DASH (Sixth Edition)

Adaptive streaming continues to evolve, and the sixth edition of MPEG-DASH (ISO/IEC 23009-1) marks a major step forward. New features include enhanced low-latency streaming, content steering across multiple CDNs, compact signaling for faster playback, and even support for interactive storylines — enabling richer, more dynamic media experiences. MPEG-DASH remains the foundation of scalable, interoperable video streaming used by billions of devices worldwide.

AI and Machine-Oriented Coding

MPEG’s vision for Audio and Video Coding for Machines continues to take shape. The updated Call for Proposals on Audio Coding for Machines (ACoM) invites technologies for efficiently compressing audio and multi-dimensional signals — not only for human listening but also for machine learning and AI-driven analysis. In parallel, Video Coding for Machines (VCM) is being standardized to optimize visual data for computer vision and autonomous systems, reducing bitrate while preserving task-relevant features.

Open Font Format (Fifth Edition)

MPEG Systems (WG 3) reached the Final Draft International Standard (FDIS) stage for the fifth edition of the Open Font Format (ISO/IEC 14496-22). This major update removes previous technical constraints, supporting over 64K glyphs and the entire Unicode range in a single file — a leap toward more inclusive digital typography across languages and writing systems.

3D and Volumetric Media Innovation

From Video-Based Dynamic Mesh Coding (V-DMC) to Low Latency Point Cloud Compression (L3C2), MPEG advanced two pivotal 3D graphics standards to final draft status. These technologies support real-time 3D content — from immersive AR/VR experiences to LiDAR-based perception in autonomous vehicles — enabling efficient, low-latency, and interoperable volumetric media.

Ensuring Media Authenticity

New amendments to MPEG Audio standards introduce mechanisms for Media Authenticity, allowing verification of content integrity and provenance across audio, video, and system layers. This step is essential for a trustworthy digital media ecosystem.

Genomics and AI Meet Multimedia

MPEG also looked beyond traditional media: the MPEG-G Genomics Hackathon, co-organized with partners such as Stanford Medicine, Philips, and Fudan University, challenges researchers to apply AI to microbiome data encoded in MPEG-G format. The goal: uncover new biomedical insights through standard-based, interoperable data compression.

Looking Ahead

From next-generation video compression and AI-enhanced codecs to trustworthy media and adaptive streaming, MPEG continues to define the building blocks of interoperable multimedia. As new technologies reshape how we experience and analyze content, standards ensure that innovation remains open, efficient, and globally accessible.

On this World Standards Day, we celebrate the dedication of all MPEG experts and contributors in shaping a smarter, more connected multimedia future.

Learn more at www.mpeg.org and stay tuned for updates from the next MPEG meeting in early 2026.

Wednesday, July 16, 2025

Full Professor of Virtual and Augmented Reality (all genders welcome)

The official and legally binding job description is available here.

The University of Klagenfurt aims to attract more qualified women to professorships.

The University of Klagenfurt is pleased to announce the following open position in the Department of Information Technology (ITEC) within the Faculty of Technical Sciences, in compliance with the provisions of Art. 98 (open-ended) or Art. 99 (limited to 5 years) of the Austrian Universities Act:

Full Professor of Virtual and Augmented Reality (all genders welcome)

This is a full-time position. Whether the position will be implemented in compliance with the provisions of Art. 98 Austrian Universities Act (open-ended) or Art. 99 of the Austrian Universities Act (limited to 5 years) will be decided in the course of the appointment procedure.

The University of Klagenfurt is a young, vibrant, and innovative university, located at the intersection of Alpine and Mediterranean culture in an area that offers an exceptionally high quality of life. As a public university pursuant to Art. 6 of the Austrian Universities Act, it receives federal funding. The Times Higher Education (THE) Young University Rankings 2021 ranked it among the 50 best young universities in the world. The university operates under the motto “Beyond Boundaries!”.

In accordance with its key strategic road map, the development plan, the university’s primary guiding principles and objectives include the pursuit of scientific excellence in the appointment of professors, favourable research conditions, a good faculty-student ratio, and support for the development of young scientists.

The professorship will be embedded in the Department of Information Technology (ITEC; https://itec.aau.at/) within the Faculty of Technical Sciences (https://www.aau.at/en/tewi), which focuses on distributed multimedia systems, including multimedia coding, transmission, and quality of experience, AI-based multimedia analysis, game studies and engineering, as well as distributed cloud and edge computing. The department and faculty provide a vibrant, friendly, and research-oriented environment. We are looking for a highly qualified and internationally recognized scientist who is strongly engaged in developing and sustaining an ambitious and innovative research programme.

Virtual and Augmented Reality (VR/AR) are broad research fields addressing both theoretical and application-driven questions. This position offers an opportunity to focus on cutting-edge VR/AR research areas including – but not limited to – immersive media (e.g., 360° videos, 3D point clouds), AI for object recognition in VR/AR (e.g., in industry and medicine), educational and training applications, computer graphics, sensor technology, human-computer interaction, and efficient multimedia data transmission and cloud/edge processing.

The professor will be involved in teaching in a variety of degree programmes, including the Bachelor’s programmes “Applied Informatics” and “Robotics and Artificial Intelligence”, and the international Master’s programmes “Informatics” and “Game Studies and Engineering”.

The duties of the position include:

  • Representing the field of Virtual and Augmented Reality in research and teaching
  • Acquiring and managing competitive research funding
  • Collaborating with colleagues across the university and with industry partners
  • Teaching in relevant Bachelor’s, Master’s, and Doctoral programmes
  • Advising and mentoring students and early career researchers
  • Contributing to the long-term development of the department and its international standing
  • Advancing the department’s and faculty’s research priorities, with a commitment to interdisciplinary collaboration
  • Contributing to university governance and academic self-administration
  • Engaging in third mission activities and public outreach

Required qualifications:
  • Habilitation or equivalent qualification in a relevant field
  • Excellent research standing and publication record in Virtual and/or Augmented Reality, including theoretical and technical foundations
  • Experience in the acquisition of competitive third-party funded research projects of a relevant volume
  • Teaching experience at university level and didactic competence
  • Experience in the (co-)supervision of academic theses
  • Collaboration and social skills
  • Fluency in English
Desired qualifications:
  • Excellent scientific communication and dissemination skills
  • Interdisciplinary experience
  • Experience with academic management duties
  • Competence in leadership and management of teams
  • Competence in gender mainstreaming and diversity management
  • Fluency in German
German language skills are not a formal prerequisite, but proficiency at level B2 is expected within two years. The remit of the professorship requires that the successful candidate establish Klagenfurt as their primary place of work.

The university is committed to increasing the number of women among the faculty, particularly in high-level positions, and therefore specifically invites applications from qualified women. Among equally qualified candidates, women will receive preferential consideration. People with disabilities or chronic diseases who meet the qualification criteria are explicitly invited to apply.

The salary is subject to negotiation. The minimum gross salary for the position at this level (salary group A1 for faculty according to the Austrian Universities’ Collective Bargaining Agreement) is currently € 92,500 per year.

In accordance with the Austrian Income Tax Act, an attractive relocation tax allowance can be granted for the first five years in the case of appointments to professorships in Austria. The prerequisites are subject to examination on a case-by-case basis.

Please submit your application in English by e-mail to the University of Klagenfurt, Office of the Senate, attn. Mag.a (FH) Sabine Seebacher via application_professorship@aau.at no later than September 28, 2025, including:
  • a mandatory principal part not exceeding five pages (specimen available at https://jobs.aau.at/wp-content/uploads/specimen_main_part_application_professorship.doc). The submission of this mandatory principal part constitutes a necessary condition for the validity of your application.
  • one single PDF including:
    • a letter of motivation
    • a detailed scientific CV
    • a comprehensive list of publications, talks, and courses taught
    • a list of acquired third-party funded research projects, including role, funding organization, and amount of funding (in case of funding acquired within a consortium, please specify the amount attributed to you)
    • a compact research statement of up to two pages
    • supplementary documents, where applicable (e.g., course evaluations)
    • links to publicly available versions of your three most important publications within the scope of this professorship
For general information, please refer to our website at https://jobs.aau.at/en/the-university-as-employer/. For specific information about the position, please contact Prof. Dr. Christian Timmerer (christian.timmerer@aau.at).

Wednesday, June 18, 2025

Up to 4 Predoc Scientist Positions (all genders welcome)

The University of Klagenfurt, with approximately 1,700 employees and over 13,000 students, is located in the Alps-Adriatic region and consistently achieves excellent placements in rankings. The motto “per aspera ad astra” underscores our firm commitment to the pursuit of excellence in all activities in research, teaching, and university management. The principles of equality, diversity, health, sustainability, and compatibility of work and family life serve as the foundation for our work at the university.

The University of Klagenfurt is in the process of establishing a Karl Popper Kolleg (graduate school) entitled “FruitScope: A DroneScope for Smart Agriculture”. The following positions are open for applicants at this school with an anticipated starting date of October 1, 2025:

Up to 4 Predoc Scientist Positions (all genders welcome)

  • Level of employment: 75 % (30 hours per week) each
  • Minimum salary: € 39,005.40 per annum (gross); classification according to collective bargaining agreement: B1
  • Limited to: 3 years
  • Application deadline: August 20, 2025
  • Reference code: 338/25

Tasks and responsibilities:

  • Independent research and scientific qualification within the Karl Popper Kolleg FruitScope with the aim of acquiring the Doctoral Degree in Technical Sciences
  • Peer-reviewed publication of scientific results in journals and at conferences
  • Teamwork and student mentoring
  • Active participation in public relations activities

This graduate school seeks to push the state of the art in navigation, coordination, sensing, and communication of multi-agent unmanned aerial vehicles (UAVs). The groups of the involved faculty publish in top international journals and conference proceedings. Successful applicants will be encouraged and supported to publish and present their work in such venues and will have the opportunity to cooperate with our world-renowned international partners in science and industry. We currently cooperate with partners worldwide, mainly in the USA/Canada and Europe. We specifically encourage close and open collaboration with our peers, both internationally and at the University, and support international exchanges with the universities and research institutions affiliated with the graduate school (e.g., ETH Zurich, MIT, CMU, NASA, UofT, U-Mich, UPenn, Georgia Tech). Our young research groups offer a dynamic, friendly, and family-like atmosphere and thus a collaborative and inspiring work environment with very modern infrastructure (e.g., one of the largest indoor drone halls in Europe), which is continuously updated and upgraded (soon, for example, with one of the largest outdoor drone test fields in the world).

Prerequisites for the appointment:

  • Completed Master’s or Diploma degree in electrical engineering, information and communication engineering, mechanical engineering, computer science or related fields. This requirement has an extended deadline and must be fulfilled two weeks before the starting date at the latest; hence, the last possible deadline for meeting this requirement is September 17, 2025.
  • Proven knowledge and experience in at least one of the following areas: mobile robotics, wireless communications or sensing, multimedia communication, signal processing for communications, or machine learning
  • Proven programming skills in at least one of the following languages or frameworks: Matlab, C/C++, Java, Python, ROS, or similar
  • Fluency in English (both written and spoken)

Additional desired qualifications:

  • Good knowledge of cooperative software development (e.g., with GIT)
  • First scientific publication (apart from Master’s or Diploma thesis) in the area of mobile robotics, wireless sensing, or multimedia communication technology
  • Relevant international or practical experience
  • Good scientific communication and presentation skills
  • German language skills or willingness to acquire German language skills within the first two years of service
  • Social skills and ability to work independently

Our offer:

The employment contract is concluded for the position as predoc scientist and stipulates a starting salary of € 2,786.10 gross per month (14 times a year; previous experience deemed relevant to the job can be recognized).
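The annual figure quoted in the position summary above follows from the Austrian convention of 14 salary payments per year. A quick sanity check (illustrative only, using integer cents to avoid floating-point rounding):

```python
# Starting salary per the collective bargaining agreement (B1, 75% employment).
monthly_gross_cents = 278_610  # € 2,786.10 per month, in cents
payments_per_year = 14         # Austrian convention: 14 payments per year

annual_gross_cents = monthly_gross_cents * payments_per_year
print(f"€ {annual_gross_cents / 100:,.2f} per annum")  # € 39,005.40 per annum
```

This reproduces the € 39,005.40 minimum annual gross salary stated in the listing.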

The University of Klagenfurt also offers:

  • Personal and professional advanced training courses, management and career coaching, including bespoke training for women in science
  • Numerous attractive additional benefits, see also https://jobs.aau.at/en/the-university-as-employer/
  • Diversity- and family-friendly university culture
  • The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature and sports

The application:

If you are interested in this position, please apply in English providing the following documents:

  • Letter of application explaining the motivation and including a statement of interest in research (indicating an idea for the research for your own doctoral degree)
  • Curriculum vitae (please do not include a photo)
  • Copies of degree certificates (Bachelor and Master)
  • Copies of official transcripts (Bachelor and Master) containing a list of all courses and grades
  • Master’s thesis. If the thesis is not available, the candidate should provide a draft or an explanation.
  • If an applicant has not received the Master’s degree by the application deadline, the applicant should provide a declaration, written either by a supervisor or by the candidate themselves, on the feasibility of finishing the Master’s degree before September 17, 2025.

To apply, please select the position with the reference code 338/25 in the category “Scientific Staff” using the link “Apply for this position” in the job portal at https://jobs.aau.at/en/.

Candidates must provide proof that they meet the required qualifications by August 20, 2025, at the latest. However, candidates who fulfil the required qualifications but do not yet possess the required Master’s degree can apply, provided they are able to meet this requirement at least two weeks before the starting date. Therefore, the latest possible deadline for meeting this requirement is September 17, 2025.

General information about the university as an employer can be found at https://jobs.aau.at/en/the-university-as-employer/. At the University of Klagenfurt, recruitment and staff matters are accompanied not only by the authority responsible for the recruitment procedure but also by the Equal Opportunities Working Group and, if applicable, by the Representative for Disabled Persons.

For further information on this specific vacancy, please contact:

The University of Klagenfurt aims to increase the proportion of women and therefore specifically invites qualified women to apply for the position. Where the qualification is equivalent, women will be given preferential consideration.

People with disabilities or chronic diseases who fulfil the requirements are particularly encouraged to apply. Travel and accommodation costs incurred during the application process will not be refunded. Under exceptional circumstances, online hearings may be possible. Translations into other languages serve informational purposes only. Solely the version advertised in the University Bulletin (Mitteilungsblatt) shall be legally binding.