MPEG has been working towards a new framework that aims to specify a coded representation for 3D scenes (including multiview video, depth, and supplementary information). This representation specifically targets the generation of high-quality intermediate views at the receiver, for auto-stereoscopic displays or stereoscopic display processing. Please refer to MPEG's vision document on 3D Video [1].
In the process of developing a reference representation and a corresponding set of view generation techniques, multiview video test data has been collected and is available at the links listed in Appendix A. However, there is currently a lack of high-quality depth map data for these test sequences: automatic depth estimation techniques have not yet achieved sufficient accuracy and robustness for high-quality view synthesis. We therefore call for high-quality depth map data for the multiview test sequences in Appendix A. Further requirements are specified in the next section.
In addition to depth maps, supplementary information such as background data, occlusion, transparency, and segmentation masks is also being called for, where available.
New stereo and multiview test sequences that fulfil the requirements for test material described in [2] are also welcome, together with appropriate depth maps.
Data Requirements
This section outlines the requirements for depth map and supplementary data. This data may be created by any means, including semi-automatic and manual generation methods. Most popular uncompressed image/video formats, including RGB, TIFF, PNG, RAW, and YUV, are acceptable. Documentation for any non-standard formats should be provided.
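For illustration, the short sketch below shows how one frame of a raw 8-bit planar YUV 4:2:0 sequence, a typical layout for the YUV format mentioned above, could be read. The file name and resolution are hypothetical, and contributors should in any case document the exact layout of their data:

    import numpy as np

    def read_yuv420_frame(f, width, height):
        # 8-bit planar YUV 4:2:0: a full-resolution Y (luma) plane
        # followed by quarter-resolution U and V (chroma) planes.
        y = np.frombuffer(f.read(width * height), np.uint8).reshape(height, width)
        u = np.frombuffer(f.read(width * height // 4), np.uint8).reshape(height // 2, width // 2)
        v = np.frombuffer(f.read(width * height // 4), np.uint8).reshape(height // 2, width // 2)
        return y, u, v

    # Hypothetical file name and resolution; for monochrome depth
    # sequences stored as YUV, only the Y plane carries information.
    with open("depth_view0.yuv", "rb") as f:
        y, u, v = read_yuv420_frame(f, 1024, 768)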
Depth Data
Data for multiview depth videos (i.e., monochrome depth sequences) of a scene from different viewpoints and view directions are sought. It is desirable to receive one corresponding depth video per video view, but depth maps for a subset of the input video views (at least two non-adjacent views) are also welcome. It is also desirable that the depth maps have pixel-level accuracy that corresponds closely to the objects in the scene. Furthermore, the depth maps should be temporally consistent.
The data necessary for correct interpretation of the depth values shall be provided. This includes the near and far clipping planes (Z_near, Z_far), as well as a statement of whether the given samples represent Z directly or, preferably, 1/Z. If any other type of data is provided, it shall be fully specified, including an algorithm for converting it to Z values.
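As a non-normative illustration, the sketch below assumes the widely used convention in which depth maps store 8-bit quantized inverse depth, with sample value 255 at Z_near and 0 at Z_far; contributed data may follow a different convention, which is exactly why the parameters above must be documented:

    import numpy as np

    def z_to_sample(z, z_near, z_far):
        # Quantize metric depth Z to an 8-bit sample, linear in 1/Z
        # (assumed convention: 255 = nearest, 0 = farthest).
        t = (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
        return np.round(255.0 * np.clip(t, 0.0, 1.0)).astype(np.uint8)

    def sample_to_z(v, z_near, z_far):
        # Recover metric depth Z from an 8-bit sample under the same
        # assumed convention.
        inv_z = v / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
        return 1.0 / inv_z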
Supplementary Data
In addition to data for multiview depth videos of a scene, other supplementary data (such as occlusion textures, occlusion depth maps, and transparencies) from different viewpoints and view directions are sought. It is desirable to receive this supplementary data per video view, but supplementary data for a subset of the input video views are also welcome. It is also desirable that the supplementary data have pixel-level accuracy that corresponds closely to the objects in the scene. Furthermore, the supplementary data should be temporally consistent.
The data necessary for correct interpretation of the supplementary data shall be provided. In contrast to depth maps, for which there exist known methods and software tools to perform view synthesis, it is expected that accompanying software to perform view synthesis based on the supplementary data will be provided. This is essential to properly evaluate the effectiveness of the data and of the related techniques to be used as a reference.
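To give context for why accompanying software matters, the sketch below outlines the basic idea of depth-based view synthesis in a deliberately simplified, non-normative form, assuming rectified parallel cameras with hypothetical parameters focal (focal length in pixels) and baseline (camera spacing); real tools add hole filling and blending, and no comparable common procedure exists yet for the supplementary data:

    import numpy as np

    def warp_to_virtual_view(texture, depth_z, focal, baseline):
        # Forward-warp each pixel by its disparity d = focal * baseline / Z,
        # resolving collisions with a z-buffer (nearest sample wins).
        # Disoccluded areas remain zero: these are the holes that
        # supplementary data such as occlusion textures can fill.
        h, w = depth_z.shape
        out = np.zeros_like(texture)
        zbuf = np.full((h, w), np.inf)
        disp = np.round(focal * baseline / depth_z).astype(int)
        for y in range(h):
            for x in range(w):
                xt = x + disp[y, x]
                if 0 <= xt < w and depth_z[y, x] < zbuf[y, xt]:
                    zbuf[y, xt] = depth_z[y, x]
                    out[y, xt] = texture[y, x]
        return out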
Copyright
The test material (and any accompanying software) should be provided free of charge and be available for use by members of the standardization committee and by respondents to a future Call for Proposals related to the development of a 3D Video standard. Even more desirable would be a free donation to the scientific community, i.e., permission to use the material in publications etc. Donors should provide a copyright notice with any contributed material, making the terms of usage clear. An example is given in Appendix B.
Logistics & Contact
MPEG would like to receive contributions in time for the 88th MPEG meeting, to be held in Maui, USA from April 20-24, 2009. It is requested that a document be submitted to that meeting that includes a link to the relevant test materials and provides further details about the materials being contributed. Those interested in making a contribution are advised to contact the persons listed below for further details.
It would be ideal for contributors to attend the meeting in person in order to allow discussion of the details of the contributions. Although regular participation in MPEG meetings is subject to certain guidelines, non-MPEG respondents will be allowed to participate in this meeting.
Prof. Jens-Rainer Ohm
(MPEG Video Subgroup Chair)
RWTH Aachen University
Institut für Nachrichtentechnik, 52056 Aachen, Germany
Phone: +49-241-80-27671
Fax: +49-241-80-22196
Email: ohm@ient.rwth-aachen.de
Dr. Anthony Vetro
(3D Video Ad-hoc Group Chair)
Mitsubishi Electric Research Labs
201 Broadway, 8th Floor
Cambridge, MA 02139 USA
Phone: +1-617-621-7591
Fax: +1-617-621-7550
Email: avetro@merl.com
References
[1] MPEG video group, “3D Video Vision,” ISO/IEC JTC1/SC29/WG11 N103xx, Lausanne, Switzerland, January 2009.
[2] MPEG video group, “Call for Contributions on 3D Video Test Material,” ISO/IEC JTC1/SC29/WG11 N9595, Antalya, Turkey, January 2008.