Showing posts with label point cloud.

Thursday, July 16, 2020

MPEG131 Press Release: Carriage of Geometry-based Point Cloud Data progresses to Committee Draft

MPEG131 Press Release: Index

Carriage of Geometry-based Point Cloud Data progresses to Committee Draft

At its 131st meeting, WG11 (MPEG) promoted the carriage of geometry-based point cloud data (ISO/IEC 23090-18) to the Committee Draft stage, the first milestone of the ISO standard development process. It is the second standard to introduce support for volumetric media into the well-known ISO base media file format (ISOBMFF) family of standards, after the standard on the carriage of video-based point cloud data (ISO/IEC 23090-10). ISO/IEC 23090-18 supports the carriage of point cloud data in multiple file format tracks in order to allow individual access to each of the attributes comprising a single point cloud. It also allows the carriage of point cloud data in a single file format track for simple applications. Since point cloud data may cover a large geographical area and can be massive in some applications, the standard supports 3D region-based partial access to the data stored in the file, so that an application can efficiently access only the portion of the data it needs to process. The standard is currently expected to reach its final milestone by mid-2021.
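To illustrate the kind of access pattern this enables, the following Python sketch shows how a player might select only those file format tracks whose declared 3D regions intersect the viewer's region of interest. All names here (Box3D, PointCloudTrack, select_tracks) are hypothetical illustrations of the 3D region-based partial access concept, not structures defined in ISO/IEC 23090-18.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box3D:
    """Axis-aligned 3D bounding box (hypothetical representation)."""
    min_x: float
    min_y: float
    min_z: float
    max_x: float
    max_y: float
    max_z: float

    def intersects(self, other: "Box3D") -> bool:
        # Two axis-aligned boxes overlap iff they overlap on every axis.
        return (self.min_x <= other.max_x and self.max_x >= other.min_x and
                self.min_y <= other.max_y and self.max_y >= other.min_y and
                self.min_z <= other.max_z and self.max_z >= other.min_z)

@dataclass
class PointCloudTrack:
    """A file format track carrying one component of the point cloud (hypothetical)."""
    track_id: int
    component: str   # e.g. "geometry", "color", "reflectance"
    region: Box3D    # 3D region declared for this track

def select_tracks(tracks: List[PointCloudTrack], roi: Box3D) -> List[PointCloudTrack]:
    """Return only the tracks whose declared 3D region overlaps the region of interest."""
    return [t for t in tracks if t.region.intersects(roi)]
```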

MPEG131 Press Release: Point Cloud Compression – WG11 (MPEG) promotes a Video-based Point Cloud Compression Technology to the FDIS stage

MPEG131 Press Release: Index

Point Cloud Compression – WG11 (MPEG) promotes a Video-based Point Cloud Compression Technology to the FDIS stage

At its 131st meeting, WG11 (MPEG) promoted its Video-based Point Cloud Compression (V-PCC) standard to the Final Draft International Standard (FDIS) stage. V-PCC addresses lossless and lossy coding of 3D point clouds with associated attributes such as colors and reflectance. Point clouds are typically represented by extremely large amounts of data, which is a significant barrier for mass market applications. However, the relative ease of capturing and rendering spatial information as point clouds compared to other volumetric video representations makes point clouds an increasingly popular way to present immersive volumetric data. With the current V-PCC encoder implementation providing compression in the range of 100:1 to 300:1, a dynamic point cloud of one million points could be encoded at 8 Mbit/s with good perceptual quality. Real-time decoding and rendering of V-PCC bitstreams have also been demonstrated on current mobile hardware.
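A rough back-of-the-envelope check of these figures is sketched below; the per-point size and frame rate are assumptions made for illustration and are not stated in the press release.

```python
# Rough sanity check of the quoted compression figures.
points_per_frame = 1_000_000   # "one million points"
bits_per_point = 30 + 24       # assumed: ~10-bit x/y/z geometry + 8-bit RGB color
frames_per_second = 30         # assumed frame rate for a dynamic point cloud

raw_bitrate = points_per_frame * bits_per_point * frames_per_second  # bits/s
for ratio in (100, 300):
    compressed_mbps = raw_bitrate / ratio / 1e6
    print(f"{ratio}:1 compression -> ~{compressed_mbps:.1f} Mbit/s")

# Under these assumptions: ~16.2 Mbit/s at 100:1 and ~5.4 Mbit/s at 300:1,
# which brackets the ~8 Mbit/s figure quoted for good perceptual quality.
```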

The V-PCC standard leverages video compression technologies and the video eco-system in general (hardware acceleration, transmission services, and infrastructure) while enabling new kinds of applications. The V-PCC standard contains several profiles that leverage existing AVC and HEVC implementations, which may make them suitable to run on existing and emerging platforms. The standard is also extensible to upcoming video specifications such as Versatile Video Coding (VVC) and Essential Video Coding (EVC).
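The core of this video-based approach is that a point cloud frame is represented as 2D images (geometry/depth and attributes) that ordinary video encoders can compress. The toy Python sketch below illustrates this idea with a single orthographic projection onto one plane; the real V-PCC patch generation, layering, and occupancy handling are considerably more elaborate, so this is only an illustration, not the standardized algorithm.

```python
import numpy as np

def project_to_images(points, colors, resolution=256):
    """Project a point cloud onto the XY plane to form a depth ("geometry")
    image and a color ("attribute") image, i.e. frames that a standard
    AVC/HEVC encoder could compress.

    points: (N, 3) float array with coordinates in [0, 1); colors: (N, 3) uint8 RGB.
    """
    depth_image = np.zeros((resolution, resolution), dtype=np.uint16)
    color_image = np.zeros((resolution, resolution, 3), dtype=np.uint8)

    # Quantize x/y to pixel coordinates and z to a 16-bit depth value.
    u = (points[:, 0] * resolution).astype(int).clip(0, resolution - 1)
    v = (points[:, 1] * resolution).astype(int).clip(0, resolution - 1)
    depth = (points[:, 2] * 65535).astype(np.uint16)

    # Keep the nearest point per pixel (a real encoder handles occlusions with
    # multiple patches/layers; this sketch simply lets nearer points overwrite).
    order = np.argsort(-points[:, 2])  # far-to-near, so near points win
    depth_image[v[order], u[order]] = depth[order]
    color_image[v[order], u[order]] = colors[order]
    return depth_image, color_image

# Example usage with random points standing in for a captured frame
pts = np.random.rand(100_000, 3)
cols = (np.random.rand(100_000, 3) * 255).astype(np.uint8)
geometry_img, attribute_img = project_to_images(pts, cols)
```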

The V-PCC standard is based on Visual Volumetric Video-based Coding (V3C), which is expected to be re-used by other MPEG-I volumetric codecs under development. MPEG is also developing a standard for the carriage of V-PCC and V3C data (ISO/IEC 23090-10), which was promoted to DIS status at the 130th MPEG meeting.

By providing a high level of immersiveness at currently available bandwidths, the V-PCC standard is expected to enable several types of applications and services such as six Degrees of Freedom (6DoF) immersive media, virtual reality (VR) / augmented reality (AR), immersive real-time communication, and cultural heritage.

Thursday, June 7, 2018

Packet Video 2018: Dynamic Adaptive Point Cloud Streaming

Dynamic Adaptive Point Cloud Streaming

Mohammad Hosseini (University of Illinois at Urbana-Champaign (UIUC)) and Christian Timmerer (Alpen-Adria-Universität Klagenfurt, Bitmovin Inc.)

[PDF]

Abstract: High-quality point clouds have recently gained interest as an emerging form of representing immersive 3D graphics. Unfortunately, these 3D media are bulky and severely bandwidth intensive, which makes it difficult to stream them to resource-limited and mobile devices. This has prompted researchers to propose efficient and adaptive approaches for streaming high-quality point clouds.

In this paper, we run a pilot study towards dynamic adaptive point cloud streaming and extend the concept of dynamic adaptive streaming over HTTP (DASH) towards DASH-PC, a dynamic adaptive, bandwidth-efficient, and view-aware point cloud streaming system. DASH-PC can tackle the huge bandwidth demands of dense point cloud streaming while, at the same time, semantically linking to human visual acuity to maintain high visual quality when needed. In order to describe the various quality representations, we propose multiple thinning approaches to spatially sub-sample point clouds in 3D space and design a DASH Media Presentation Description manifest specific to point cloud streaming. Our initial evaluations show that we can achieve significant bandwidth and performance improvements for dense point cloud streaming with minor negative quality impact compared to the baseline scenario in which no adaptation is applied.
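As a rough illustration of the thinning idea (spatially sub-sampling a point cloud to produce multiple quality representations that a client can switch between), the Python sketch below applies voxel-grid sub-sampling at several grid sizes. The method and parameters are generic assumptions for illustration, not the specific thinning approaches evaluated in the paper.

```python
import numpy as np

def voxel_grid_thin(points, voxel_size):
    """Keep one point per occupied voxel of edge length `voxel_size`.

    A generic spatial sub-sampling ("thinning") sketch, not the exact
    approaches proposed in the paper.
    """
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over voxel ids yields one representative point per occupied voxel.
    _, keep = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(keep)]

# Build several quality representations (coarser voxels -> fewer points),
# which a DASH-PC-style client could choose among based on available
# bandwidth and the viewer's distance to the object.
cloud = np.random.rand(500_000, 3)   # stand-in for a dense point cloud
representations = {s: voxel_grid_thin(cloud, s) for s in (0.005, 0.01, 0.02)}
for size, rep in representations.items():
    print(f"voxel size {size}: {len(rep):,} points")
```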