Wednesday, November 27, 2019

MPEG news: a report from the 128th meeting, Geneva, Switzerland

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

The 128th MPEG meeting concluded on October 11, 2019 in Geneva, Switzerland with the following topics:
  • Low Complexity Enhancement Video Coding (LCEVC) Promoted to Committee Draft
  • 2nd Edition of Omnidirectional Media Format (OMAF) has reached the first milestone
  • Genomic Information Representation – Part 4 Reference Software and Part 5 Conformance Promoted to Draft International Standard
The corresponding press release of the 128th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/128.
In this report we will focus on video coding aspects (i.e., LCEVC) and immersive media applications (i.e., OMAF). At the end, we will provide an update related to adaptive streaming (i.e., DASH and CMAF).

Low Complexity Enhancement Video Coding

Low Complexity Enhancement Video Coding (LCEVC) has been promoted to Committee Draft (CD), which is the first milestone in the ISO/IEC standardization process. LCEVC is part two of MPEG-5, or ISO/IEC 23094-2 if you prefer the always easy-to-remember ISO codes. We already introduced MPEG-5 in previous posts; LCEVC is a standardized video coding solution that leverages other video codecs in a manner that improves video compression efficiency while maintaining or lowering the overall encoding and decoding complexity.
The LCEVC standard uses a lightweight video codec to add up to two layers of encoded residuals. These layers correct artefacts produced by the base video codec and add detail and sharpness to the final output video.
The target of this standard comprises software or hardware codecs with extra processing capabilities, e.g., mobile devices, set-top boxes (STBs), and personal computer-based decoders. Additional benefits are a reduction in implementation complexity or a corresponding increase in spatial resolution.
LCEVC is based on existing codecs, which allows for backwards compatibility with existing deployments. Supporting LCEVC enables “softwareized” video coding, allowing for release and deployment options known from software-based solutions, which are well understood by software companies and thus open new opportunities for improving and optimizing video-based services and applications.
Research aspects: in video coding, research efforts are mainly related to coding efficiency and complexity (as usual). However, as MPEG-5 LCEVC basically adds a software layer on top of what is typically implemented in hardware, all kinds of aspects related to software engineering could become an active area of research.
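To make the layering idea more concrete, here is a minimal Python sketch of an LCEVC-style reconstruction: a decoded base-codec frame is corrected by a first residual layer, upsampled, and refined by a second residual layer. This is purely conceptual; the actual upsampling filters, transforms, and syntax of ISO/IEC 23094-2 are not modeled, and the array shapes are assumptions.

```python
import numpy as np

def upsample_2x(frame):
    """Nearest-neighbour 2x upsampling (a stand-in for the normative upsampler)."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def lcevc_style_reconstruct(base_frame, residual_l1, residual_l2):
    """Illustrative enhancement-layer reconstruction:
       1) take the decoded base-codec frame at lower resolution,
       2) add a first residual layer correcting base-codec artefacts,
       3) upsample and add a second residual layer restoring detail/sharpness.
       This mirrors the concept of 'up to two layers of encoded residuals',
       not the actual LCEVC syntax, transforms, or filters."""
    corrected = base_frame.astype(np.int16) + residual_l1
    full_res = upsample_2x(corrected) + residual_l2
    return np.clip(full_res, 0, 255).astype(np.uint8)

# toy example: a 4x4 base frame enhanced to an 8x8 output frame
base = np.full((4, 4), 128, dtype=np.uint8)
r1 = np.zeros((4, 4), dtype=np.int16)   # base-resolution residuals
r2 = np.zeros((8, 8), dtype=np.int16)   # full-resolution residuals
print(lcevc_style_reconstruct(base, r1, r2).shape)  # (8, 8)
```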

Omnidirectional Media Format

The scope of the Omnidirectional Media Format (OMAF) covers 360° video, images, audio, and associated timed text; the standard specifies (i) a coordinate system, (ii) projection and rectangular region-wise packing methods, (iii) storage of omnidirectional media and the associated metadata using ISOBMFF, (iv) encapsulation, signaling, and streaming of omnidirectional media in DASH and MMT, and (v) media profiles and presentation profiles.
At this meeting, the second edition of OMAF (ISO/IEC 23090-2) has been promoted to Committee Draft (CD), which includes
  • support for improved overlays of graphics or textual data on top of video,
  • efficient signaling of videos structured in multiple sub-parts,
  • enabling more than one viewpoint, and
  • new profiles supporting dynamic bitstream generation according to the viewport.
As with the first edition, OMAF includes encapsulation and signaling in ISOBMFF as well as streaming of omnidirectional media (DASH and MMT). It is expected to reach its final milestone by the end of 2020.
360° video is certainly a vital use case towards a fully immersive media experience. Devices to capture and consume such content are becoming increasingly available and will probably contribute to the dissemination of this type of content. However, it is also understood that the complexity increases significantly, specifically with respect to large-scale, scalable deployments due to increased content volume/complexity, timing constraints (latency), and quality of experience issues.
Research aspects: understanding the increased complexity of 360° video, or immersive media in general, is certainly an important aspect to be addressed towards enabling applications and services in this domain. We may even argue that 360° video already works (e.g., it is possible to capture it, upload it to YouTube, and consume it on many devices), but the devil is in the detail: this complexity needs to be handled efficiently in order to enable a seamless and high quality of experience.
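As a small, simplified illustration of the coordinate system and projection aspects listed above, the following Python sketch maps a viewing direction (azimuth/elevation on the sphere) to pixel coordinates of an equirectangular projection (ERP) picture; the exact sample-position and orientation conventions of OMAF are deliberately simplified here.

```python
def sphere_to_erp(azimuth_deg, elevation_deg, width, height):
    """Map a direction on the sphere (azimuth in [-180, 180] degrees,
    elevation in [-90, 90] degrees) to pixel coordinates of an
    equirectangular projection (ERP) picture. Simplified: OMAF defines the
    precise sample positions and orientation conventions."""
    u = (azimuth_deg + 180.0) / 360.0      # 0..1 across the picture width
    v = (90.0 - elevation_deg) / 180.0     # 0..1 across the picture height
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

# looking straight ahead maps to the centre of a 3840x1920 ERP picture
print(sphere_to_erp(0.0, 0.0, 3840, 1920))  # (1920, 960)
```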

DASH and CMAF

The 4th edition of DASH (ISO/IEC 23009-1) will be published soon, and MPEG is currently working towards a first amendment covering (i) CMAF support and (ii) an event processing model. An overview of all DASH standards is depicted in the figure below, notably part one of MPEG-DASH, referred to as media presentation description and segment formats.
The 2nd edition of the CMAF standard (ISO/IEC 23000-19) will become available very soon, and MPEG is currently reviewing additional tools in the so-called technologies under consideration document as well as conducting various explorations. A working draft for additional media profiles is also under preparation.
Research aspects: with CMAF, low-latency support is added to DASH-like applications and services. However, the implementation specifics are actually not defined in the standard and are subject to competition (e.g., here). Interestingly, the Bitmovin video developer reports from both 2018 and 2019 highlight the need for low-latency solutions in this domain.
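Since the client-side adaptation logic is not defined in the standard, the following Python sketch shows one common (non-normative) heuristic that low-latency players use: a harmonic-mean throughput estimate over recent CMAF chunks combined with a safety margin for representation selection. The bitrate ladder, the measurement values, and the 0.8 margin are assumptions for illustration only.

```python
def harmonic_mean(samples_bps):
    """Harmonic mean of per-chunk throughput samples (bits per second);
    it is conservative towards low outliers, which suits low-latency mode."""
    return len(samples_bps) / sum(1.0 / s for s in samples_bps)

def select_representation(throughput_bps, bitrates_bps, safety=0.8):
    """Pick the highest representation bitrate below safety * estimated
    throughput; fall back to the lowest representation otherwise."""
    candidates = [b for b in sorted(bitrates_bps) if b <= safety * throughput_bps]
    return candidates[-1] if candidates else min(bitrates_bps)

# hypothetical per-chunk throughput measurements and bitrate ladder (bits per second)
recent_chunks = [4.8e6, 5.2e6, 3.9e6]
ladder = [1_000_000, 2_500_000, 4_500_000, 8_000_000]
estimate = harmonic_mean(recent_chunks)
print(int(estimate), select_representation(estimate, ladder))  # ~4.57 Mbps -> 2500000
```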
At the ACM Multimedia Conference 2019 in Nice, France I gave a tutorial entitled “A Journey towards Fully Immersive Media Access” which includes updates related to DASH and CMAF. The slides are available here.

Outlook 2020

Finally, let me try to give an outlook for 2020, not so much content-wise but in terms of events planned for 2020 that are highly relevant for this column:
... and many more!

Wednesday, November 6, 2019

IEEE ICME 2020 – Industry papers and demos

IEEE ICME annually attracts a truly global audience, with more than 500 attendees from academia and industry. During ICME 2020 we are expecting similar or higher levels of interest. Research on the covered topics is flourishing, and London’s rich academic and technological innovation scene is expected to attract a large number of participants from all over the world.

The expo (industry and demo) part of ICME 2020 provides the opportunity for industry leaders, start-up companies, and academic institutions to showcase their innovative technologies and products. The expo will be co-located with the technical poster presentation area and will run continuously throughout the whole conference, thus representing a perfect opportunity to network and exchange ideas with internationally recognised researchers.

We are inviting companies to submit their contributions for the expo in the categories summarised below. We especially encourage start-ups, incubators, and university consortia to participate in the demo program.

Industry papers, demo papers and hands-on demos: 
  • Will be presented in separate sessions during the main conference
  • Can be related to any of the technical areas covered by ICME
  • Should be submitted following the guidelines available online
  • At least one author of an accepted contribution should register for the conference and present the work 
Industry / application papers

Industry / application papers at ICME 2020 focus on technology and innovation developed towards solving real-world problems. Researchers and practitioners are encouraged to submit contributions, in the form of 4-page papers, describing practical applications of technology and how multimedia technology can help in practical scenarios and/or commercial use cases. The papers will be reviewed following the same procedure as workshop papers, where novelty, presentation quality, and experimental validation will be considered. Papers are submitted via the CMT online system.

Demo papers
ICME 2020 calls for demo papers, which can either be proposed as independent 2-page demo papers or be associated with a paper from the ICME 2020 main program and/or co-located workshops. The goal of the demo paper program is to promote applied research and applications, as well as to facilitate collaborations between industrial and academic members of the multimedia community. A prospective paper should provide a comprehensive description of the innovative technology to be demonstrated, including the equipment involved and/or the set-up necessary for attendees to follow the approach or system. Papers are submitted via the CMT online system.

Hands-on demos
Proposals for industry hands-on (showcase) demos are invited to be part of ICME 2020. Each hands-on demo will be shown during the poster sessions on one day of the main conference. This type of presentation is intended for R&D-focused demos by industrial partners to illustrate novel multimedia technology. Commercial products should instead be presented as part of the exhibition reserved for conference sponsors. Authors are invited to submit an abstract describing the technology being demonstrated, the equipment that will be used, and the demo experience. The abstract should give details of the innovation showcased in the demo and how it will appeal to the ICME audience. Demos should ideally have an interactive component. If the demo is associated with a paper submitted or accepted to ICME, please also provide the corresponding paper ID.

Monday, October 14, 2019

Happy World Standards Day 2019 - Video Standards Create a Global Stage

Today, on October 14, we celebrate World Standards Day, "the day honors the efforts of the thousands of experts who develop voluntary standards within standards development organizations" (SDOs). Many SDOs such as W3C, IETF, ITU, and ISO (incl. JPEG and MPEG) celebrate this with individual statements, highlighting the importance of standards and interoperability in today's information and communication technology landscape. Interestingly, this year's topic for the World Standards Day within ISO is about video standards creating a global stage. Similarly, national bodies of ISO provide such statements within their own countries, e.g., the A.S.I. statement can be found here (note: in German). I have also blogged about the World Standards Day in 2017.

HEVC Emmy located at ITU-T, Geneva, CH (Oct'19).
The amount of video content created, distributed (incl. delivery, streaming, ...), processed, and consumed increases tremendously; in fact, more than 60 percent of today's worldwide internet traffic is attributed to video streaming. For example, within a single internet minute in 2019, almost 700,000 hours of video are watched on Netflix and 4.5 million videos are viewed on YouTube. Videos are typically compressed (or encoded) prior to distribution and decompressed (or decoded) before rendering on a plethora of heterogeneous devices. Such codecs (a portmanteau of coder-decoder) are subject to standardization, and with AVC and HEVC (jointly developed by ISO/IEC MPEG and ITU-T VCEG) we have two successful standards which have even been honored with Primetime Engineering Emmy Awards (see one of them in the picture).

Within Austria, Bitmovin was awarded the Living Standards Award in 2017 for its contribution to the MPEG-DASH standard, which enables dynamic adaptive streaming over HTTP. This standard -- the 4th edition is becoming available very soon -- is now heavily deployed and has been adopted within products and services such as Netflix, Amazon Prime Video, YouTube, etc.

Standardization can be both a source of and a sink for research activities, i.e., the development of efficient algorithms conforming to existing standards or research efforts leading to new standards. One example of such research efforts has just recently started at the Institute of Information Technology (ITEC) at Alpen-Adria-Universität Klagenfurt (AAU) as part of the ATHENA (AdapTive Streaming over HTTP and Emerging Networked MultimediA Services) project. The aim of this project is to research and develop novel paradigms, approaches, (prototype) tools, and evaluation results for the phases (i) multimedia content provisioning (video coding), (ii) content delivery (video networking), (iii) content consumption (player) in the media delivery chain, and (iv) end-to-end aspects, with a focus on, but not limited to, HTTP Adaptive Streaming (HAS).

The SDO behind these standards is MPEG (officially ISO/IEC JTC 1/SC 29/WG 11), which has a proven track record of producing very successful standards (not only those mentioned as examples above), and its future is currently being discussed within its parent body (SC 29). A possible MPEG future is described here, which suggests upgrading the current SC 29 working groups to sub-committees (SCs), specifically to spin off a new SC that basically covers MPEG while the remaining WG (JPEG) continues within SC 29. This proposal of MPEG and JPEG at SC level is partially motivated by the fact that both WGs work on a large set of standardization projects, actually developed by their subgroups. Thus, elevating both WGs (JPEG & MPEG) to SC level would not only reflect the current status quo but would also preserve two important brands for both academia and industry. Further details can be found at http://mpegfuture.org/.

Tuesday, September 10, 2019

2019 Global Internet Phenomena Report: more than 60 percent is Video Streaming

Source: Sandvine, Sep 10, 2019.
The 2019 Global Internet Phenomena Report has been published on September 10, 2019 and is available here.  I've previously posted about this in 2015, 2018, and Feb 2019 (related to mobile). Thus, it's also interesting to compare this report with what has been posted previously, specifically with respect to the 2018 report...

The 2019 global report reveals that video streaming now accounts for more than 60% of internet traffic (see figure on the right), which is only a small increase compared to last year (+2.9 percentage points). We may question whether the 80% (or more) will be reached by 2022 as predicted by some reports (note: I assume this one here is meant). The question is whether 4K or 8K will help to make these predictions become reality; we will see pretty soon.

Interestingly, Netflix's application traffic share decreased by 2.3 percentage points and is now about 12.6%, while other HTTP media streaming traffic reached 12.8%. "Operator IPTV" increased to 7.2%, i.e., +2.8 percentage points compared to last year. For example, in the Americas "Operator IPTV" even has a higher downstream application traffic share (15%) than Netflix (12.87%).

From a European perspective, we see QUIC among the top 10 of "EMEA: Downstream Application Traffic Share" with 3.1% but, unfortunately, the report does not provide further details about what that actually means.

The "Spotlight: Streaming Video Traffic Share" is shown in the figure below and reveals that both Netflix and YouTube have a higher traffic share in EMEA than Americas and "Operator IPTV" is only mentioned in Americas and not at all in EMEA or APAC.

Source: Sandvine, Sep 10, 2019.

As mentioned in the beginning, the full report is available here -- also covering other aspects -- and it confirms that the global video traffic share keeps increasing, but probably in smaller steps than anticipated some years ago.

Monday, September 9, 2019

Video Developer Report 2019

... and Bitmovin did it again, publishing the 2019 Video Developer Report last week. I briefly reported on it last year here. Interestingly, this year 542 people from 108 countries participated (vs. 456 from over 67 countries last year).

The biggest challenges seem to be latency (54%) and playback on all devices (41%). Other challenges (>20%) are related to DRM, CDN, user engagement with video, and ads in general.

Last year I also shared the codec usage, and it is probably interesting to compare those numbers with this year's results as shown below. Interestingly, the numbers (for 'planning to implement') are a bit lower compared to last year, which could be explained by a more conservative approach from developers or simply by the fact that more people from a greater diversity of countries responded to the survey.

Current Video Codec Usage and Plans to Implement in next 12 Months.
The actual video codec usage compares to last year's report as follows: AVC (-1), HEVC (+1), VP9 (+/- 0), AV1 (+1).

Another interesting aspect is the usage of streaming formats and plans to implement them within the next 12 months as shown below. Comparing with last year's report (available here), we can observe the following major changes: HLS (-3), MPEG-DASH (-3), RTMP (-2), Smooth Streaming (+2), Progressive Streaming (-1), MPEG-CMAF (+2), HDS (-4).

Current Streaming Formats and Plans to Implement in next 12 Months.

In general, one can observe that the adoption of new formats is happening at a slower pace than expected, and I am wondering what this means for upcoming video coding formats such as VVC et al. (note: these are results from a public survey with different participants than last year's, which needs to be taken into account when comparing results over the years).

For more details, the full report can be downloaded for free from here.

Thursday, September 5, 2019

ACMMM'19: Docker-Based Evaluation Framework for Video Streaming QoE in Broadband Networks

Docker-Based Evaluation Framework for Video Streaming QoE in Broadband Networks
(Demo Paper)


[PDF]

Cise Midoglu (Simula), Anatoliy Zabrovskiy (AAU), Özgü Alay (Simula), Daniel Hölbling-Inzko (Bitmovin), Carsten Griwodz (Univ. of Oslo), Christian Timmerer (AAU/Bitmovin)

Abstract: Video streaming is one of the top traffic contributors on the Internet and a frequent research subject. It is expected that streaming traffic will grow 4-fold for video globally and 9-fold for mobile video between 2017 and 2022. In this paper, we present an automated measurement framework for evaluating video streaming QoE in operational broadband networks, using headless streaming with a Docker-based client and a server-side implementation allowing for the use of multiple video players and adaptation algorithms. Our framework allows for integration with the MONROE testbed and Bitmovin Analytics, which makes it possible to conduct large-scale measurements in different networks, including mobility scenarios, and to monitor different parameters in the application, transport, network, and physical layers in real time.

Keywords: adaptive streaming, network measurements, OTT video analytics, QoE
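As a rough illustration of how such a Docker-based headless client could be orchestrated, the following Python sketch uses the Docker SDK for Python to run a streaming-session container and collect its logs for later QoE analysis. The image name, environment variables, and log handling are hypothetical placeholders and not the actual interface of the framework described in the paper.

```python
import docker  # Docker SDK for Python (pip install docker)

def run_headless_session(image, mpd_url, duration_s=60):
    """Run a (hypothetical) headless streaming-client container, wait for the
    session to finish, and return its log output for later QoE analysis.
    The image name and environment variables are placeholders; the actual
    framework additionally integrates MONROE and Bitmovin Analytics."""
    client = docker.from_env()
    container = client.containers.run(
        image,
        detach=True,
        environment={"MPD_URL": mpd_url, "DURATION_S": str(duration_s)},
    )
    container.wait()                       # block until the streaming session ends
    logs = container.logs().decode("utf-8")
    container.remove()
    return logs

# hypothetical usage:
# print(run_headless_session("example/headless-player:latest",
#                            "https://example.com/manifest.mpd"))
```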

Wednesday, August 21, 2019

MPEG news: a report from the 127th meeting, Gothenburg, Sweden

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

MPEG News Archive

Plenary of the 127th MPEG Meeting in Gothenburg, Sweden.
The 127th MPEG meeting concluded in July 2019 in Gothenburg, Sweden with the following topics:
  • Versatile Video Coding (VVC) enters formal approval stage, experts predict 35-60% improvement over HEVC
  • Essential Video Coding (EVC) promoted to Committee Draft
  • Common Media Application Format (CMAF) 2nd edition promoted to Final Draft International Standard
  • Dynamic Adaptive Streaming over HTTP (DASH) 4th edition promoted to Final Draft International Standard
  • Carriage of Point Cloud Data Progresses to Committee Draft
  • JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition
  • Genomic information representation – WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
  • ISO/IEC 23005 (MPEG-V) 4th Edition – WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

The corresponding press release of the 127th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/127

Versatile Video Coding (VVC)

The Moving Picture Experts Group (MPEG) is pleased to announce that Versatile Video Coding (VVC) progresses to Committee Draft; experts predict a 35-60% improvement over HEVC.

The development of the next major generation video coding standard has achieved excellent progress, such that MPEG has approved the Committee Draft (CD, i.e., the text for formal balloting in the ISO/IEC approval process).

The new VVC standard will be applicable to a very broad range of applications and will also provide additional functionalities. VVC will provide a substantial improvement in coding efficiency relative to existing standards. The improvement is expected to be quite substantial, e.g., in the range of a 35-60% bit rate reduction relative to HEVC, although it has not yet been formally measured. "Relative to HEVC" here means for equivalent subjective video quality at picture resolutions such as 1080p HD or 4K/8K UHD, either for standard dynamic range video or for high dynamic range and wide color gamut content, at levels of quality appropriate for use in consumer distribution services. The focus during the development of the standard has primarily been on 10-bit 4:2:0 content, but the 4:4:4 chroma format will also be supported.

The VVC standard is being developed in the Joint Video Experts Team (JVET), a group established jointly by MPEG and the Video Coding Experts Group (VCEG) of ITU-T Study Group 16. In addition to a text specification, the project also includes the development of reference software, a conformance testing suite, and a new standard ISO/IEC 23002-7 specifying supplemental enhancement information messages for coded video bitstreams. The approval process for ISO/IEC 23002-7 has also begun, with the issuance of a CD consideration ballot.

Research aspects: VVC represents the next-generation video codec to be deployed in 2020+, and basically the same research aspects apply as for previous generations, i.e., coding efficiency, performance/complexity, and objective/subjective evaluation. Luckily, JVET documents are freely available, including the actual standard (Committee Draft), software (and its description), and common test conditions. Thus, researchers utilizing these resources are able to conduct reproducible research when contributing their findings and code improvements back to the community at large.
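Regarding objective evaluation, coding-efficiency gains such as the predicted 35-60% are typically quantified with the Bjøntegaard delta rate (BD-rate). The following Python sketch shows the classic cubic-fit BD-rate computation; the rate/PSNR points are made-up numbers, not measured VVC/HEVC results.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Classic Bjøntegaard delta rate: fit log(rate) as a cubic polynomial of
    PSNR for both codecs, integrate over the overlapping PSNR range, and
    return the average bitrate difference of the test codec versus the
    reference in percent (negative values mean bitrate savings)."""
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

# made-up rate/quality points (kbps, dB PSNR) for a reference and a test codec
ref_rates, ref_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_rates, test_psnr = [700, 1400, 2800, 5600], [34.1, 36.6, 39.1, 41.6]
print(f"BD-rate: {bd_rate(ref_rates, ref_psnr, test_rates, test_psnr):.1f}%")  # roughly -30%
```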

Essential Video Coding (EVC)

MPEG-5 Essential Video Coding (EVC) promoted to Committee Draft

Interestingly, at the same meeting as VVC, MPEG promoted MPEG-5 Essential Video Coding (EVC) to Committee Draft (CD). The goal of MPEG-5 EVC is to provide a standardized video coding solution to address business needs in some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics.

The MPEG-5 EVC standard includes a baseline profile that contains only technologies that are over 20 years old or are otherwise expected to be royalty-free. Additionally, a main profile adds a small number of additional tools, each providing a significant performance gain. All main profile tools are capable of being individually switched off or individually switched over to a corresponding baseline tool. Organizations making proposals for the main profile have agreed to publish applicable licensing terms within two years of the FDIS stage, either individually or as part of a patent pool.

Research aspects: similar research aspects as for VVC can be described for EVC, and from a software engineering perspective it could also be interesting to further investigate this switching mechanism of individual tools and/or the fallback option to baseline tools. Naturally, a comparison with next-generation codecs such as VVC is interesting per se. The licensing aspects themselves are probably interesting for other disciplines, but that is another story...
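The per-tool switching idea can be pictured as a configuration in which each main-profile tool can be disabled individually, falling back to its baseline counterpart. The Python sketch below is purely illustrative; the tool names are invented placeholders and do not correspond to the actual EVC tool set.

```python
from dataclasses import dataclass, field

@dataclass
class EvcStyleToolConfig:
    """Illustrative (non-normative) model of the MPEG-5 EVC idea that every
    main-profile tool can be switched off individually or switched over to a
    corresponding baseline tool. The tool names are invented placeholders."""
    main_tools: dict = field(default_factory=lambda: {
        "advanced_intra_prediction": True,
        "improved_deblocking": True,
        "extended_motion_model": True,
    })

    def switch_to_baseline(self, tool):
        """Disable a main-profile tool and report the fallback in use."""
        if tool in self.main_tools:
            self.main_tools[tool] = False
            return f"{tool}: switched over to its baseline counterpart"
        return f"{tool}: unknown tool"

cfg = EvcStyleToolConfig()
print(cfg.switch_to_baseline("extended_motion_model"))
print(cfg.main_tools)
```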

Common Media Application Format (CMAF)

MPEG ratified the 2nd edition of the Common Media Application Format (CMAF)

The Common Media Application Format (CMAF) enables efficient encoding, storage, and delivery of digital media content (incl. audio, video, subtitles among others), which is key to scaling operations to support the rapid growth of video streaming over the internet. The CMAF standard is the result of widespread industry adoption of an application of MPEG technologies for adaptive video streaming over the Internet, and widespread industry participation in the MPEG process to standardize best practices within CMAF.

The 2nd edition of CMAF adds support for a number of specifications that were a result of significant industry interest. Those include
  • Advanced Audio Coding (AAC) multi-channel;
  • MPEG-H 3D Audio;
  • MPEG-D Unified Speech and Audio Coding (USAC);
  • Scalable High Efficiency Video Coding (SHVC);
  • IMSC 1.1 (Timed Text Markup Language Profiles for Internet Media Subtitles and Captions); and
  • additional HEVC video CMAF profiles and brands.
This edition also introduces CMAF supplemental data handling as well as new structural brands for CMAF that reflect the common practice of the significant deployment of CMAF in industry. Companies adopting CMAF technology will find the specifications introduced in the 2nd edition particularly useful for the further adoption and proliferation of CMAF in the market.

Research aspects: see below (DASH).

Dynamic Adaptive Streaming over HTTP (DASH)

MPEG approves the 4th edition of Dynamic Adaptive Streaming over HTTP (DASH)

The 4th edition of MPEG-DASH comprises the following features:
  • a service description that expresses how the service provider expects the service to be consumed;
  • a method to indicate the times corresponding to the production of associated media;
  • a mechanism to signal DASH profiles and features, employed codec and format profiles; and
  • supported protection schemes present in the Media Presentation Description (MPD).
It is expected that this edition will be published later this year. 

Research aspects: the CMAF 2nd edition and the DASH 4th edition come with a rich feature set enabling a plethora of use cases. The underlying principles are still the same, and research issues arise from updated application and service requirements with respect to content complexity, time aspects (mainly delay/latency), and quality of experience (QoE). The DASH-IF presents the Excellence in DASH Award at the ACM Multimedia Systems conference, and an overview of its academic efforts can be found here. For example, see our recent research on bandwidth prediction in low-latency chunked streaming here. Additionally, our tutorial at ACM Multimedia 2019 about a journey towards fully immersive media access reviews the state of the art in this area and how it could be extended to enable 6DoF HAS services through point cloud compression.
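To illustrate the service description feature from the list above, the following Python sketch generates a minimal (non-normative) MPD fragment with a latency target and playback-rate bounds; the element and attribute names follow DASH-IF low-latency interoperability guidance, the values are made up, and the published 4th edition text remains the authoritative schema.

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative MPD fragment carrying a service description with a
# latency target and playback-rate bounds. Values are made up, and the element
# and attribute names follow DASH-IF low-latency guidance; consult the
# published ISO/IEC 23009-1 4th edition for the normative schema.
mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "dynamic",
    "minimumUpdatePeriod": "PT2S",
})
sd = ET.SubElement(mpd, "ServiceDescription", {"id": "0"})
ET.SubElement(sd, "Latency", {"target": "3500", "min": "2000", "max": "6000"})  # milliseconds
ET.SubElement(sd, "PlaybackRate", {"min": "0.96", "max": "1.04"})

print(ET.tostring(mpd, encoding="unicode"))
```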

Carriage of Point Cloud Data

MPEG progresses the Carriage of Point Cloud Data to Committee Draft

At its 127th meeting, MPEG has promoted the carriage of point cloud data to the Committee Draft stage, the first milestone of the ISO standard development process. This standard is the first to introduce support for volumetric media into the industry-famous ISO base media file format (ISOBMFF) family of standards.

This standard supports the carriage of point cloud data comprising individually encoded video bitstreams within multiple file format tracks in order to support the intrinsic nature of video-based point cloud compression (V-PCC). Additionally, it also allows the carriage of point cloud data in one file format track for applications requiring multiplexed content (i.e., the video bitstreams of multiple components interleaved into one bitstream).

This standard is expected to support efficient access and delivery of portions of a point cloud object, considering that in many cases the entire point cloud object may not be visible to the user, depending on the viewing direction or the location of the point cloud object relative to other objects. It is currently expected that the standard will reach its final milestone by the end of 2020.

Research aspects: MPEG's Point Cloud Compression (PCC) comes in two flavors, video-based and geometry-based, but both still require packaging into file and delivery formats. MPEG's choice here is the ISO base media file format, and the efficient carriage of point cloud data is characterized by both functionality (i.e., enabling the required use cases) and performance (such as low overhead).
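Since the carriage format builds on the ISO base media file format, overhead and access analyses usually start from the box structure. The following Python sketch walks the top-level boxes of an ISOBMFF file using the 32-bit size/type header defined by the base specification; the 64-bit largesize and size-zero cases are handled only in a simplified way, and the file name is a placeholder.

```python
import struct

def list_top_level_boxes(path):
    """Walk the top-level boxes of an ISO base media file format (MP4-family)
    file and yield (box_type, size) tuples. Each box starts with a 32-bit size
    and a 4-character type; the 64-bit 'largesize' (size == 1) and
    'to end of file' (size == 0) cases are handled in a simplified way."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            name = box_type.decode("ascii", "replace")
            if size == 1:                          # 64-bit largesize follows the type
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:                        # box extends to the end of the file
                yield name, None
                break
            else:
                payload = size - 8
            yield name, size
            f.seek(payload, 1)                     # skip the box payload

# example usage (file name is a placeholder):
# for name, size in list_top_level_boxes("pointcloud_v-pcc.mp4"):
#     print(name, size)
```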

MPEG 2 Systems/Transport Stream

JPEG XS carriage in MPEG-2 TS promoted to Final Draft Amendment of ISO/IEC 13818-1 7th edition

At its 127th meeting, WG11 (MPEG) has extended ISO/IEC 13818-1 (MPEG-2 Systems) – in collaboration with WG1 (JPEG) – to support ISO/IEC 21122 (JPEG XS) in order to support industries using still image compression technologies for broadcasting infrastructures. The specification defines a JPEG XS elementary stream header and specifies how the JPEG XS video access unit (specified in ISO/IEC 21122-1) is put into a Packetized Elementary Stream (PES). Additionally, the specification also defines how the System Target Decoder (STD) model can be extended to support JPEG XS video elementary streams.

Genomic information representation

WG11 issues a joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5

The introduction of high-throughput DNA sequencing has led to the generation of large quantities of genomic sequencing data that have to be stored, transferred, and analyzed. So far, WG 11 (MPEG) and ISO TC 276/WG 5 have addressed the representation, compression, and transport of genome sequencing data by developing the ISO/IEC 23092 standard series, also known as MPEG-G, which provides a file and transport format, compression technology, metadata specifications, protection support, and standard APIs for accessing sequencing data in its native compressed format.

An important element in the effective usage of sequencing data is the association of the data with the results of the analysis and the annotations that are generated by processing pipelines and analysts. At the moment, such association happens as a separate step; standard and effective ways of linking data and meta-information derived from sequencing data are not available.

At its 127th meeting, MPEG and ISO TC 276/WG 5 issued a joint Call for Proposals (CfP) addressing this problem. The call seeks submissions of technologies that can provide efficient representation and compression solutions for the processing of genomic annotation data.

Companies and organizations are invited to submit proposals in response to this call. Responses are expected to be submitted by 8 January 2020 and will be evaluated during the 129th WG 11 (MPEG) meeting. Detailed information, including how to respond to the call for proposals, the requirements that have to be considered, and the test data to be used, is reported in the documents N18648, N18647, and N18649, available at the 127th meeting website (http://mpeg.chiariglione.org/meetings/127). For any further questions about the call, test conditions, required software, or test sequences, please contact: Joern Ostermann, MPEG Requirements Group Chair (ostermann@tnt.uni-hannover.de), or Martin Golebiewski, Convenor ISO TC 276/WG 5 (martin.golebiewski@h-its.org).

ISO/IEC 23005 (MPEG-V) 4th Edition

WG11 promotes the Fourth edition of two parts of “Media Context and Control” to the Final Draft International Standard (FDIS) stage

At its 127th meeting, WG11 (MPEG) promoted the 4th edition of two parts of the ISO/IEC 23005 (MPEG-V; Media Context and Control) standards to the Final Draft International Standard (FDIS) stage. The new edition of ISO/IEC 23005-1 (architecture) enables ten new use cases, which can be grouped into four categories: 3D printing, olfactory information in virtual worlds, virtual panoramic vision in cars, and adaptive sound handling. The new edition of ISO/IEC 23005-7 (conformance and reference software) has been updated to reflect the changes made by the introduction of new tools defined in other parts of ISO/IEC 23005. More information on MPEG-V and its parts 1-7 can be found at https://mpeg.chiariglione.org/standards/mpeg-v.

Finally, we certainly found the unofficial highlight of the 127th MPEG meeting while scanning the scene in Gothenburg on Tuesday night...