Sunday, May 16, 2010

MPEG Press Release: Proposals for HEVC demonstrate substantial compression gains for video coding

--full press release available here.
Dresden, DE – The 92nd MPEG meeting was held in Dresden, Germany from the 19th to the 23rd of April 2010.

Highlights of the 92nd Meeting
  • Joint Collaborative Team on Video Coding evaluates 27 proposals for HEVC
  • MPEG issues Call for Proposals for streaming MPEG media over HTTP
  • MPEG-U for Rich Media User Interfaces is completed
  • New BIFS Profile for enhanced Mobile Services is ready to be deployed
  • Amendment to MPEG-7 Defines Robust Technology for Video Signatures
Details for each highlight can be found in the full press release.

Digging Deeper – How to Contact MPEG

Communicating the large and sometimes complex array of technology that the MPEG Committee has developed is not a simple task. The experts past and present have contributed a series of white papers and vision documents that explain each of these standards individually. The repository is growing with each meeting, so if something you are interested in is not there yet, it may appear there shortly – but you should also not hesitate to request it. You can start your MPEG adventure at: http://mpeg.chiariglione.org/technologies.htm.

Saturday, October 31, 2009

MPEG news: a report from the 90th meeting in Xi'an, China

The 90th MPEG meeting in Xi’an, China has come up with some very interesting news, which is briefly highlighted here. First and, I think, most importantly, the timeline for the new MPEG/ITU-T video coding format has been discussed and it seems the final Call for Proposals (CfP) will be ready in January 2010. A draft CfP is available now and will hopefully also become publicly available if all the editing issues are solved by early November. This means that the proposals will be evaluated in April 2010 (note: this will be a busy meeting as a couple of other calls need to be evaluated too; see below). The CfP defines five classes of test sequences with the following characteristics (number of sequences available in brackets):
  • Class A with 2560x1600 cropped from 4Kx2K (2);
  • Class B with 1920x1080p at 24/50-60 fps (5);
  • Class C with 832x480 WVGA (4);
  • Class D with 416x240 WQVGA (4); and
  • Class E with 1280x720p at 50-60fps (3).
For classes B, C, and E, subjective tests will be performed, whereas classes A and D will be evaluated only objectively using PSNR. The reason for evaluating A and D with objective measurements is that their subjective differences from B and C, respectively, are insignificant. Finally, they’re still discussing the actual common nickname of the standard, as it seems some are not happy with high-performance video coding, but that’s yet another story…
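
For readers less familiar with the objective metric mentioned above, here is a minimal sketch of how PSNR between an original and a decoded frame is typically computed (plain Python/NumPy, 8-bit luma samples assumed; this is illustrative only and not part of the CfP text):

    import numpy as np

    def psnr(original, decoded, max_value=255.0):
        """Peak signal-to-noise ratio in dB between two 8-bit frames (higher is better)."""
        original = original.astype(np.float64)
        decoded = decoded.astype(np.float64)
        mse = np.mean((original - decoded) ** 2)  # mean squared error per sample
        if mse == 0:
            return float("inf")                   # identical frames
        return 10.0 * np.log10((max_value ** 2) / mse)

    # Example: a WQVGA-sized (416x240) luma frame versus a slightly distorted copy.
    ref = np.random.randint(0, 256, (240, 416), dtype=np.uint8)
    noise = np.random.randint(-3, 4, ref.shape)
    dist = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print("PSNR:", round(psnr(ref, dist), 2), "dB")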

Second, 3D video coding is still a major topic in MPEG, but you will probably have to wait yet another year until a Call for Proposals is issued. That is, a 3DV standard will probably be available around the beginning of 2013 at the earliest. The major issue right now is the availability of content – as usual – as well as the differing 3D video standards of device manufacturers.

The third major topic at this meeting was AIT and MMT, two acronyms you will become more familiar with in the future. The former refers to the Advanced IPTV Terminal (AIT) and aims to develop an ecosystem for media value chains and networks. To this end, basic (atomic) services will be defined, including protocols (payload formats) to enable users to call these services, Application Programming Interfaces to access the services, and bindings to specific programming languages. Currently, 30 of these basic services are foreseen, which can be clustered into services pertaining to identification, authentication, description, storage, adaptation, posting, packaging, delivery, presentation, interaction, aggregation, management, search, negotiation, and transaction. The timeline is similar to that of HVC, which means that proposals will be evaluated in April 2010.

The latter refers to MPEG Media Transport (MMT) and basically aims to become the successor of the well-known MPEG-2 Transport Stream. Currently, two topics are being explored, for which requirements have also been formulated: the first covers adaptive, progressive transport and the second is in the area of cross-layer design. Further topics this activity might look into are hybrid delivery and conversational services. As for HVC and AIT, the proposals are going to be evaluated in April 2010. However, in order to further refine this possible new work item, MPEG will hold a workshop on the Wednesday of the Kyoto meeting in January 2010, focusing on “adaptive progressive transport” and “cross-layer design”.

In any case, MPEG is looking forward to a very busy meeting in April 2010, which, by the way, will be held in Dresden, Germany.

Another issue that was discussed (again) in Xi’an was the development of a royalty-free codec within MPEG. While some might say that, within MPEG, trying to establish a royalty-free codec is a first step towards failure, others argue that MPEG-1 is already royalty free, that most MPEG-2 patents expire in 2011, that the Internet community is requesting this (e.g., the IETF has established a codec group and Google has chosen On2, a royalty-free codec), and, finally, that the royalty-free baseline of MPEG-4 Part 10 basically failed. Thus, maybe (or hopefully) this is the right time for a royalty-free codec within MPEG, and who can predict the future? Anyway, there’s some activity going on in this area and if you’re interested, stay tuned…

Finally, I’d like to note that MPEG-V (Media Context & Control) and MPEG-U (Rich Media User Interface) are progressing smoothly, both going hand in hand towards their finalization. At this meeting, the FCDs were approved, which is a major milestone as this was the last chance for substantial new contributions. One such input was related to advanced user interaction devices like the Wiimote, which will become part of MPEG-V but will also be used by MPEG-U. Hence, one might argue for merging these two standards into one single standard called MPEG-W (i.e., U+V=W), and a wedding ceremony could be performed at the next meeting in Kyoto with Geishas as witnesses … why not? Please raise your voice now or be silent forever!

Monday, July 27, 2009

MPEG Global Conference points the way to Ultra HD online services

London meeting sees significant improvement in compression for High Performance Video Coding

London, United Kingdom – The 89th MPEG meeting was held in London, United Kingdom from the 29th of June to the 3rd of July 2009.

Highlights of the 89th Meeting

Responses for Evidence Evaluated for HVC
During its 89th meeting, MPEG evaluated responses that were received on the Call for Evidence on High-Performance Video Coding (HVC), issued to obtain evidence of video coding technology providing compression capability clearly higher than that provided by the existing AVC standard (ITU-T H.264 | ISO/IEC 14496-10). Significant gains in compression were found when an assessment was made based on information brought by the contributors. A subjective comparison was performed in a blind test with a set of video test sequences encoded by the AVC High Profile at matching rate points. Gains were demonstrated for test cases ranging from resolutions as low as 416x240 pixels (Wide QVGA) up to ultra-high definition resolutions. MPEG has therefore concluded that the development of the next generation of video compression technology is to be started with the issuing of a formal Call for Proposals by the next meeting.

AVC Extended with New Profiles for Baseline and MVC Technologies

At the 89th meeting, the AVC standard (ITU-T H.264 | ISO/IEC 14496-10) was further extended with the issuing of a Final Draft Amendment (FDAM) ballot containing the specification of two new profiles and new supplemental enhancement information. The first of the new profiles is the Constrained Baseline Profile, which forms the maximally interoperable set of coding tools from the most widely deployed of the existing profiles (the Baseline and High Profiles). The second new profile is a special case of multiview video coding (MVC) called the Stereo High Profile. The Stereo High Profile enables all of the coding tools of the High Profile along with inter-view prediction capability for two-view (stereo) video applications such as 3D entertainment video.

Additionally, a new supplemental enhancement information (SEI) message has been defined for AVC. This new message – called the frame packing arrangement SEI message – enables the encoder to indicate to the decoder how to extract two distinct views of a video scene from a single decoded frame. The message also serves as a way to support stereo-view video in applications that require full compatibility with prior decoder designs that are not capable of supporting the new Stereo High Profile.
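
As a rough illustration of what such packing means for an application, the sketch below splits one decoded frame into left and right views for two common arrangements (side-by-side and top-bottom). The function and arrangement names are assumptions made for this example; the normative SEI syntax is defined in the AVC specification itself.

    import numpy as np

    def unpack_stereo(frame, arrangement):
        """Split one packed decoded frame into (left_view, right_view)."""
        h, w = frame.shape[:2]
        if arrangement == "side_by_side":   # left view in left half, right view in right half
            return frame[:, : w // 2], frame[:, w // 2 :]
        if arrangement == "top_bottom":     # left view on top, right view on the bottom
            return frame[: h // 2, :], frame[h // 2 :, :]
        raise ValueError("unsupported arrangement: " + arrangement)

    # A 1080p frame carrying two half-width views side by side.
    packed = np.zeros((1080, 1920), dtype=np.uint8)
    left, right = unpack_stereo(packed, "side_by_side")
    print(left.shape, right.shape)          # (1080, 960) (1080, 960)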

MPEG Promotes Technologies to link Real and Virtual Worlds

At its 88th meeting, MPEG had published a new call for proposals (N10526) with updated requirements (N10235) for an extension of the Media Context and Control project.

The technical contributions related to haptic and tactile devices, emotions, and virtual goods received at its 89th meeting have enabled MPEG to build a complete framework for defining haptic properties on top of virtual objects and to control haptic devices. This is now part of ISO/IEC 23005 or MPEG-V, a standard (formerly called Information Exchange with Virtual Worlds) providing a global framework and associated data representations to enable the interoperability between different virtual worlds (e.g. a digital content provider of a virtual world, a game with the exchange of real currency, or a simulator) and between virtual worlds and the real world (sensors, actuators, robotics, travel, real estate, or other physical systems).

MPEG Progresses Media Context and Control Project

MPEG has also advanced four parts of MPEG-V to the Committee Draft stage. The first part describes the architecture of the standard. The second part, “Control Information”, provides metadata representation of device capabilities and user preferences to be used for the information exchange between a controlling device and the real actuators or sensors. The third part, “Sensory Information”, provides metadata to represent sensory effects such as temperature, wind, vibration, fog, and more. The fourth part, “Avatar Characteristics”, provides metadata to commonly represent information about Avatars for the exchange of virtual characters between virtual worlds.
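
To make the idea of sensory-effect metadata more concrete, here is a toy model of an effect track in Python; the field names are invented for this sketch and do not reproduce the actual MPEG-V schema.

    from dataclasses import dataclass

    @dataclass
    class SensoryEffect:
        """Illustrative container for one sensory effect tied to a media timeline."""
        effect_type: str    # e.g. "temperature", "wind", "vibration", "fog"
        intensity: float    # normalized 0.0 .. 1.0
        start_ms: int       # activation time relative to the media timeline
        duration_ms: int

    # A short effect track accompanying a video clip.
    effects = [
        SensoryEffect("wind", 0.6, start_ms=12000, duration_ms=4000),
        SensoryEffect("vibration", 0.9, start_ms=15500, duration_ms=500),
    ]
    for e in effects:
        print(e.effect_type, "at", e.start_ms, "ms, intensity", e.intensity)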

MPEG Hosts MXM Developer’s Day

The first MXM Developer’s Day workshop was hosted by MPEG during its 89th meeting. The workshop featured demonstrations by companies and organisations that are developing MXM standards and applications. MXM, currently at its Final Committee Draft stage, provides specifications of APIs and an open source implementation (released under the BSD licence) to access various MPEG standards for easy deployment of applications. In this workshop, detailed information about the APIs currently under standardization was provided, and several interesting demonstrations with the potential to create new business opportunities were also presented. More information about this workshop can be found at http://mxm.wg11.sc29.org.

Rich Media User Interface Moves toward Completion

At its 89th meeting, MPEG has also advanced MPEG Rich Media UI (ISO/IEC 23007 or MPEG-U) to the Committee Draft stage. MPEG-U standardizes widget packaging, delivery, representation, and communication formats. In its current draft, MPEG-U adopts and extends the W3C widget representation to provide a complete framework that can also be used in a non-Web-based environment without a browser. Additionally, this standard enables communication among widgets on the same device or on different devices, as well as with other applications, to better support connected environments.
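
For orientation, a W3C-style widget is essentially a zip archive containing a config.xml manifest and a start page; the small Python sketch below builds such a package. The manifest content is deliberately minimal and an assumption for illustration; it does not show the MPEG-U extensions themselves.

    import zipfile

    CONFIG_XML = """<?xml version="1.0" encoding="UTF-8"?>
    <widget xmlns="http://www.w3.org/ns/widgets" id="http://example.com/clock" version="1.0">
      <name>Clock</name>
      <content src="index.html"/>
    </widget>
    """

    def build_widget(path="clock.wgt"):
        """Pack a minimal widget: manifest plus start page."""
        with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as wgt:
            wgt.writestr("config.xml", CONFIG_XML)
            wgt.writestr("index.html", "<html><body><p>Hello widget</p></body></html>")
        return path

    print(build_widget())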

Visual Signatures Enable New Applications

MPEG’s Visual Signatures define the world’s first standardized tools for content-based identification of any visual content even in very large databases, e.g. on the web. These tools enable a range of new applications including semantic linking, library management, metadata association (e.g. title, photographer, director, etc.) and content usage tracking. In the same way that a fingerprint or signature identifies a person, a Visual Signature is a compact descriptor uniquely representing either an image or video. The descriptor is derived directly from analysis of the visual content and is robust to heavy compression and editing.
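
The idea is easiest to grasp with a toy example: derive a compact bit pattern from the pixels and compare two signatures by a simple distance. The average-hash scheme below is purely illustrative and is not the MPEG-7 Image or Video Signature algorithm.

    import numpy as np

    def average_hash(image, size=8):
        """Toy 64-bit signature: block-average down to size x size, threshold at the mean."""
        h, w = image.shape
        bh, bw = h // size, w // size
        blocks = image[: bh * size, : bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
        return (blocks > blocks.mean()).astype(np.uint8).ravel()

    def hamming_distance(sig_a, sig_b):
        """Number of differing bits; small distances suggest the same content."""
        return int(np.count_nonzero(sig_a != sig_b))

    img = np.random.randint(0, 256, (240, 416)).astype(np.float64)
    edited = np.clip(img + 5.0, 0, 255)                               # mild global edit
    print(hamming_distance(average_hash(img), average_hash(edited)))  # typically 0 or very small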

The Image Signature and Video Signature are two separate amendments to MPEG-7. Collectively the two amendments are referred to as the MPEG-7 Visual Signatures. At the London meeting, the Video Signature advanced to the Proposed Draft Amendment (PDAM) stage with a target completion date of July 2010. The Image Signature was published as an ISO/IEC standard in April 2009.

Mobile Services to Be Enhanced by New BIFS Profile
At this meeting, MPEG advanced the new BInary Format for Scenes (BIFS) profile to the Committee Draft stage by incorporating additional nodes and technologies submitted as responses to the Call for Proposals for new BIFS technologies. The requirements for this profile (provided in N10567) originated from organizations of various industries and SDOs for digital radio and mobile television broadcasting. This profile will enable the development of more efficient and enhanced interactive services for mobile broadcasting services including digital radio or mobile television on small handheld devices. Moreover, it is backward compatible with Core2D@Level1 which is widely adopted by the industry.

Contact MPEG

Digging Deeper Once Again
Communicating the large and sometimes complex array of technology that the MPEG Committee has developed is not a simple task. The experts past and present have contributed a series of white papers that explain each of these standards individually. The repository is growing with each meeting, so if something you are interested in is not there yet, it may appear there shortly – but you should also not hesitate to request it. You can start your MPEG adventure at: http://www.chiariglione.org/mpeg/mpeg-tech.htm

Ends

Further Information
Future MPEG meetings are planned as follows:
No. 90, Xi'an, CN, 26-30 October, 2009
No. 91, Kyoto, JP, 18-22 January, 2010
For further information about MPEG, please contact:
Dr. Leonardo Chiariglione (Convener of MPEG, Italy)
Via Borgionera, 103
10040 Villar Dora (TO), Italy
Tel: +39 011 935 04 61
Email: leonardo@chiariglione.org
or
Dr. Arianne T. Hinds
Ricoh | IBM InfoPrint Solutions Company
6300 Diagonal Highway, MS 04N
Boulder, CO 80301, USA
Tel +1 720 663 3565
Email: arianne.hinds@infoprint.com

This press release and other MPEG-related information can be found on the MPEG homepage:
http://www.chiariglione.org/mpeg
The text and details related to the Calls mentioned above (together with other current Calls) are in the Hot News section, http://www.chiariglione.org/mpeg/hot_news.htm. These documents include information on how to respond to the Calls.
The MPEG homepage also has links to other MPEG pages which are maintained by the MPEG subgroups. It also contains links to public documents that are freely available for download by those who are not MPEG members. Journalists that wish to receive MPEG Press Releases by email should contact Dr. Arianne T. Hinds using the contact information provided above.

Friday, July 3, 2009

MPEG news: a report from the 89th meeting in London, UK

A lot of interesting things happened at this meeting, notably the MXM Developer's Day, the Modern Media Transport workshop, the promotion of MPEG-V and MPEG-U to committee draft, and the finding that enough evidence has been provided for MPEG High-performance Video Coding (HVC) to start working towards a Call for Proposals (CfP).

The MXM Developer's Day was a great success with 45+ participants, and all presentations are publicly available. Leonardo presented the MXM Vision, while Filippo and Marius concentrated on the MXM Architecture and the API, respectively. This introductory session was followed by practical examples and demonstrations.
The workshop on Modern Media Transport (MMT) had even more participants (80+) and was clustered into two sessions. Session one focused on industry practice, with presentations on how MPEG-2 TS and MP4 are being used. Furthermore, the DVB activity in the area of IPTV and Internet TV was presented. All the presentations will be made publicly available through the MPEG Web site. The conclusion was that although MPEG-2 TS and MP4 are heavily used, their very popularity brings some drawbacks; in particular, MPEG-2 TS is running out of code points, which is an issue. On the other hand, if MPEG is going to standardize something new, it has been recognized that it has to be substantially better than what exists on the market, with a strong demand for backwards compatibility with MPEG-2 TS. The issue will be further studied, so stay tuned!

MPEG-V, also known as Media Context and Control, has had four parts promoted to committee draft. The four parts are as follows:
  • Part 1: Architecture
  • Part 2: Control Information
  • Part 3: Sensory Information
  • Part 4: Avatar Characteristics
I've provided an overview during the final plenary and the presentation is accessible here.

MPEG-U is about widgets and has also been promoted to committee draft. It seems to be an interesting activity with a relationship to W3C's Widget activity, and it will be interesting to see how these two standards co-exist.

Finally, the Call for Evidence for High-performance Video Coding (HVC) led to the following result: "Yes, we have enough evidence of improved compression technology (compared to AVC HP)". Thus, MPEG has started working towards a Call for Proposals and a time schedule has been created. Furthermore, the future collaboration between MPEG and VCEG has been discussed.

That's it for now but I'll provide more details on the individual topics later. Please stay tuned!