Tuesday, July 28, 2015

MPEG news: a report from the 112th meeting, Warsaw, Poland



This blog post is also available at the bitmovin tech blog and SIGMM records.

The 112th MPEG meeting in Warsaw, Poland was a special meeting for me. It was my 50th MPEG meeting, which roughly accumulates to one year of MPEG meetings (i.e., one year of my life I've spent in MPEG meetings incl. traveling - scary, isn't it? ... more on this in another blog post). But what happened at this 112th MPEG meeting (my 50th meeting)...

  • Requirements: CDVA, Future of Video Coding Standardization (no acronym yet), Genome compression
  • Systems: M2TS (ISO/IEC 13818-1:2015), DASH 3rd edition, Media Orchestration (no acronym yet), TRUFFLE
  • Video/JCT-VC/JCT-3D: MPEG-4 AVC, Future Video Coding, HDR, SCC
  • Audio: 3D audio
  • 3DG: PCC, MIoT, Wearable
MPEG Friday Plenary. Photo (c) Christian Timmerer.
As usual, the official press release and other publicly available documents can be found here. Let's dig into the different subgroups:
Requirements

In the requirements subgroup, experts worked on the Call for Proposals (CfP) for Compact Descriptors for Video Analysis (CDVA) including an evaluation framework. The evaluation framework includes 800-1000 objects (large objects like building facades, landmarks, etc.; small(er) objects like paintings, books, statues, etc.; scenes like interior scenes, natural scenes, multi-camera shots) and the evaluation of the responses is planned for the 114th meeting in San Diego.

The future of video coding standardization is currently taking shape in MPEG, paving the way for the successor of the HEVC standard. The current goal is to provide (native) support for scalability (more than two spatial resolutions) and a 30% compression gain for some applications (requiring only a limited increase in decoder complexity); preferred, however, is a 50% compression gain (at a significant increase in encoder complexity). MPEG will hold a workshop at the next meeting in Geneva discussing specific compression techniques, objective (HDR) video quality metrics, and compression technologies for specific applications (e.g., multiple-stream representations, energy-saving encoders/decoders, games, drones). The International Standard for this new video coding standard is targeted for around 2020.

MPEG has recently started a new project referred to as Genome Compression, which is, of course, about the compression of genome information. A big dataset has been collected and experts are working on the Call for Evidence (CfE). The plan is to hold a workshop at the next MPEG meeting in Geneva on the prospects of genome compression and storage standardization, targeting users, manufacturers, service providers, technologists, etc.

Systems


Summer in Warsaw. Photo (c) Christian Timmerer.
The 5th edition of the MPEG-2 Systems standard has been published as ISO/IEC 13818-1:2015 on the 1st of July 2015 and is a consolidation of the 4th edition + Amendments 1-5.

In terms of MPEG-DASH, the draft text of ISO/IEC 23009-1 3rd edition, comprising the 2nd edition + COR 1 + AMD 1 + AMD 2 + AMD 3 + COR 2, is available for committee-internal review. Publication is expected in 2016. Currently, MPEG-DASH sees a lot of activity in the following areas: spatial relationship description, generalized URL parameters, authentication, access control, multiple MPDs, full duplex protocols (aka HTTP/2 etc.), advanced and generalized HTTP feedback information, and various core experiments:
  • SAND (Server and Network Assisted DASH)
  • FDH (Full Duplex DASH)
  • SAP-Independent Segment Signaling (SISSI)
  • URI Signing for DASH
  • Content Aggregation and Playback Control (CAPCO)
In particular, the core experiment process is very open as most work is conducted during the Ad hoc Group (AhG) period which is discussed on the publicly available MPEG-DASH reflector.
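All of these extensions ultimately hang off the Media Presentation Description (MPD), the XML manifest at the heart of DASH. As a quick illustration, here is a minimal sketch of inspecting an MPD with Python's standard library; the embedded MPD is a hypothetical toy example, not an official test vector:

```python
import xml.etree.ElementTree as ET

# Hypothetical toy MPD for illustration; real MPDs carry far more detail.
MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="720p" bandwidth="3000000" width="1280" height="720"/>
      <Representation id="1080p" bandwidth="6000000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD)
# List the representations a client's adaptation logic can switch between.
for rep in root.findall(".//dash:Representation", NS):
    print(rep.get("id"), rep.get("bandwidth"), "bit/s")
```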

MPEG Systems recently started an activity related to media orchestration, which applies to capture as well as consumption and concerns scenarios with multiple sensors as well as multiple rendering devices, including one-to-many and many-to-one scenarios, resulting in a worthwhile, customized experience.

Finally, the systems subgroup started an exploration activity regarding real-time streaming of files (a.k.a. TRUFFLE), which should perform a gap analysis leading to extensions of the MPEG Media Transport (MMT) standard. However, some experts within MPEG concluded that most (if not all) use cases identified within this activity could actually be solved with existing technology such as DASH. Thus, this activity may still need some discussion...

Video/JCT-VC/JCT-3D

The MPEG video subgroup is working towards a new amendment for the MPEG-4 AVC standard covering resolutions up to 8K and higher frame rates for lower resolutions. Interestingly, although MPEG is most of the time ahead of industry, 8K and high frame rates are already supported in browser environments (e.g., using bitdash 8K, HFR) and modern encoding platforms like bitcodin. However, it's good that we finally have means for an interoperable signaling of this profile.

In terms of future video coding standardization, the video subgroup released a call for test material. Two sets of test sequences are already available and will be investigated regarding compression until the next meeting.

After a successful Call for Evidence for High Dynamic Range (HDR), the technical work starts in the video subgroup with the goal of developing an architecture ("H2M") as well as three core experiments (optimization without HEVC specification change, alternative reconstruction approaches, objective metrics).

The main topic of the JCT-VC was screen content coding (SCC), which came up with new coding tools that better compress content that is (fully or partially) computer-generated, leading to a significant improvement in compression, i.e., rate reductions of approximately 50% or more for specific screen content.

Audio

The audio subgroup is mainly concentrating on 3D audio, where it identified the need for intermediate bitrates between 3D audio phase 1 and phase 2. Currently, phase 1 identified 256, 512, and 1200 kb/s whereas phase 2 focuses on 128, 96, 64, and 48 kb/s. The broadcasting industry needs intermediate bitrates and, thus, phase 2 is extended to cover bitrates between 128 and 256 kb/s.

3DG

MPEG 3DG is working on point cloud compression (PCC), for which open source software has been identified. Additionally, there are new activities in the areas of Media Internet of Things (MIoT) and wearable computing (like glasses and watches) that could lead to new standards developed within MPEG. Therefore, stay tuned on these topics as they may shape your future.

The week after the MPEG meeting I met the MPEG convenor and the JPEG convenor again during ICME2015 in Torino, but that's another story...
L. Chiariglione, H. Hellwagner, T. Ebrahimi, C. Timmerer (from left to right) during ICME2015. Photo (c) T. Ebrahimi.



Thursday, March 19, 2015

MMSys 2016 - Preliminary Call for Papers


ACM Multimedia Systems 2016 (MMSys'16) [PDF]
co-located with NOSSDAV, MoVid, and MMVE

May 10-13, 2016
Klagenfurt am Wörthersee, Austria

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating system, real-time system, and database communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to view the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types. MMSys is a venue for researchers who explore:
  • Complete multimedia systems that provide a new kind of multimedia experience or systems whose overall performance improves the state-of-the-art through new research results in one or more components, or
  • Enhancements to one or more system components that provide a documented improvement over the state-of-the-art for handling continuous media or time-dependent services.
Such individual system components include:
  • Operating systems
  • Distributed architectures and protocol enhancements
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New or improved I/O architectures or I/O devices, innovative uses and algorithms for their operation
  • Representation of continuous or time-dependent media
  • Metrics, measures and measurement tools to assess performance and quality of service/experience
This touches aspects of many hot topics: adaptive streaming, games, virtual environments, augmented reality, 3D video, immersive systems, telepresence, multi- and many-core, GPGPUs, mobile streaming, P2P, Clouds, cyber-physical systems. All submissions will be peer-reviewed by at least 3 members of the technical program committee. Full papers will be evaluated for their scientific quality. Accepted papers must reach a high scientific standard and document unpublished research.

Committee ACM MMSys
  • General chair: Christian Timmerer, AAU
  • TPC chair: Ali C. Begen, CISCO
  • Dataset chair: Karel Fliegel, CTU
  • Demo chairs: Omar Niamut, TNO & Michael Zink, UMass
  • Proceedings chair: Benjamin Rainer, AAU
  • Publicity chairs
    • America: Baochun Li, University of Toronto
    • Asia: Sheng-Wei Chen (a.k.a. Kuan-Ta Chen), Academia Sinica
    • Middle East: Mohamed Hefeeda, Qatar Computing Research Institute (QCRI)
    • Europe: Vincent Charvillat, IRIT-ENSEEIHT-Toulouse Univ.
  • Local chair: Laszlo Böszörmenyi, AAU
Important dates ACM MMSys
  • Submission deadline: November 27, 2015
  • Reviews available to authors: January 15, 2016
  • Rebuttal deadline:  January 22, 2016
  • Acceptance notification: January 29, 2016
  • Camera ready deadline: March 11, 2016
Committee ACM NOSSDAV (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Hermann Hellwagner, AAU
  • TPC chair: Eckehard Steinbach, TUM
Important dates ACM NOSSDAV
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MMVE (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Jean Botev, Univ. of Luxembourg
Important dates ACM MMVE
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MoVid (co-located with MMSys)
  • TPC chair: Pål Halvorsen, Simula/Univ. Oslo
  • TPC co-chair: Qi Han, Colorado School of Mines
Important dates ACM MoVid
  • Submission deadline: tbd
  • Acceptance notification: tbd
  • Camera ready deadline: tbd
Local organisation
  • Chair: Laszlo Böszörmenyi
  • Alpen-Adria-Universität Klagenfurt (AAU)
  • Institute of Information Technology (ITEC)
  • Universitätsstraße 65-67, A-9020 Klagenfurt
  • Email: mmsys2016@itec.aau.at

Wednesday, March 18, 2015

MPEG news: a report from the 111th meeting, Geneva, Switzerland

MPEG111 opening plenary.
This blog post is also available at SIGMM records.

The 111th MPEG meeting (note: the link includes the press release and all publicly available output documents) was held in Geneva, Switzerland, showing some interesting aspects which I'd like to highlight here. Undoubtedly, it was the shortest meeting I've ever attended (and my first meeting was #61) as the final plenary concluded at 2015/02/20T18:18!

In terms of the requirements (subgroup) it's worth mentioning the Call for Evidence (CfE) for high dynamic range (HDR) and wide color gamut (WCG) video coding, which comprises a first milestone towards a new video coding format. The purpose of this CfE is to explore whether or not (a) the coding efficiency and/or (b) the functionality of the HEVC Main 10 and Scalable Main 10 profiles can be significantly improved for HDR and WCG content. In addition, the requirements subgroup issued a draft call for evidence on free-viewpoint TV. Both documents are publicly available here.

The video subgroup continued discussions related to the future of video coding standardisation and issued a public document requesting contributions on "future video compression technology". Interesting application requirements come from over-the-top streaming use cases, which request HDR and WCG as well as video over cellular networks. Well, at least the former is something to be covered by the CfE mentioned above. Furthermore, features like scalability and perceptual quality are something that should be considered from the ground up and not (only) as an extension. Yes, scalability really helps a lot in OTT streaming, starting with easier content management and cache-efficient delivery, and it allows for more aggressive buffer modelling and, thus, adaptation logic within the client, enabling better Quality of Experience (QoE) for the end user (see the sketch below). It seems like complexity (at the encoder) is not so much a concern as long as it scales with cloud deployments such as http://www.bitcodin.com/ (e.g., the bitdash demo area shows some neat 4K/8K/HFR DASH demos which have been encoded with bitcodin). Closely related to 8K, there's a new AVC amendment coming up covering 8K; although one can do it already today (see before), it's good to have standards support for this. For HEVC, the JCT-3D/VC issued the FDAM4 for 3D Video Extensions and started with PDAM5 for Screen Content Coding Extensions (both documents being publicly available after an editing period of about a month).
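To make the buffer-modelling argument concrete, here is a deliberately simplified sketch of a buffer-based adaptation logic in Python. The thresholds and bitrates are made-up illustration values, not taken from any standard or from the bitdash player:

```python
# Simplified buffer-based bitrate adaptation (illustrative values only).
# The intuition: with scalability, already-buffered base layers can be
# reused when switching up, so a client can afford a more aggressive policy.

BITRATES = [1_000_000, 3_000_000, 6_000_000]  # available representations, bit/s

def next_level(buffer_s: float, current: int) -> int:
    """Pick a representation index based on the buffer level in seconds."""
    if buffer_s < 5:       # danger zone: drop to lowest to avoid stalling
        return 0
    if buffer_s > 20:      # comfortable buffer: probe one level up
        return min(current + 1, len(BITRATES) - 1)
    return current         # otherwise hold steady

level = 0
for buf in [3.0, 8.0, 22.0, 25.0]:
    level = next_level(buf, level)
    print(f"buffer={buf:4.1f}s -> request {BITRATES[level]} bit/s")
```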

And what about audio? The audio subgroup has decided that ISO/IEC DIS 23008-3 3D Audio shall be promoted directly to IS, which means that the DIS was already in such a good state that only editorial comments are applied, which actually saves a balloting cycle. We have to congratulate the audio subgroup for this remarkable milestone.

Finally, I’d like to discuss a few topics related to DASH which is progressing towards its 3rd edition which will incorporate amendment 2 (Spatial Relationship Description, Generalized URL parameters and other extensions), amendment 3 (Authentication, Access Control and multiple MPDs), and everything else that will be incorporated within this year, like some aspects documented in the technologies under consideration or currently being discussed within the core experiments (CE).
Currently, MPEG-DASH conducts 5 core experiments:
  • Server and Network Assisted DASH (SAND)
  • DASH over Full Duplex HTTP-based Protocols (FDH)
  • URI Signing for DASH (CE-USD)
  • SAP-Independent Segment Signaling (SISSI)
  • Content aggregation and playback control (CAPCO)
The description of core experiments is publicly available and, compared to the previous meeting, we have a new CE on content aggregation and playback control (CAPCO), which "explores solutions for aggregation of DASH content from multiple live and on-demand origin servers, addressing applications such as creating customized on-demand and live programs/channels from multiple origin servers per client, targeted preroll ad insertion in live programs and also limiting playback by client such as no-skip or no fast forward." This process is quite open and anybody can join by subscribing to the email reflector.

The CE for DASH over Full Duplex HTTP-based Protocols (FDH) is gaining traction and basically defines the usage of DASH with the push features of WebSockets and HTTP/2. At this meeting MPEG issued a working draft; also, the CE on Server and Network Assisted DASH (SAND) got its own part 5, where it goes to CD, but the documents are not publicly available. However, I'm pretty sure I can report more on this next time, so stay tuned or feel free to comment here.
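Since the working draft isn't public yet, the following is only a conceptual sketch of what push-based segment delivery over a WebSocket could look like, using the Python websockets package; the endpoint and message framing are hypothetical and not taken from the FDH draft:

```python
# Conceptual sketch of push-style segment delivery over a WebSocket.
# Endpoint and message layout are hypothetical; the actual FDH draft
# defines its own request/push semantics.
import asyncio
import websockets  # pip install websockets

async def receive_pushed_segments(uri: str) -> None:
    async with websockets.connect(uri) as ws:
        # One request up front; the server then pushes segments without
        # further round trips -- the core idea behind full duplex DASH.
        await ws.send("PUSH representation=720p")
        while True:
            segment = await ws.recv()  # payload of the next media segment
            print(f"received segment of {len(segment)} bytes")

# Requires a matching (hypothetical) server:
# asyncio.run(receive_pushed_segments("wss://example.com/dash-push"))
```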

Friday, February 27, 2015

IEEE JSAC Special Issue: Video Distribution over Future Internet


Special issue on Video Distribution over Future Internet 

Extended Submission Deadline: May 29, 2015 (extended from May 15)


The current Internet is under tremendous pressure due to the exponential growth in bandwidth demand, fueled by the shift of video consumption to online distribution, IPTV, streaming services such as Netflix, and from phone networks to videoconferencing and Skype-like video communications. The Internet has also democratized the creation, distribution and sharing of user-generated video content through services such as YouTube, Vimeo or Hulu. The situation is further aggravated by the emerging trend of adopting higher-definition video streams, demanding more and more bandwidth. Indeed, the Cisco Visual Networking Index (VNI) projects that video consumption will amount to 90% of the global consumer traffic by 2017. Another shift predicted by Cisco VNI is that most data communications will be wireless by 2018.

To cope with the bandwidth growth, the shift to wireless, and to solve other related issues (e.g., naming, security, etc.) with the current Internet, new architectures for the future Internet have been proposed and prototyped. Examples include Content-Centric Networks (CCN) or Named Data Networking (NDN), or some content-based extensions to Software-Defined Networking (SDN), among others. None of these emerging architectures deals specifically with video distribution, as they need to support a wider range of services, but all would have to support video in an efficient manner. Therefore, the study of video distribution over the future Internet is of primary importance: how well do future Internet architectures facilitate video delivery? What kind of video distribution mechanisms need to be created to run on the future Internet? How will video be supported in the wireless portion of the future Internet? Can the current video distribution mechanisms (such as end-to-end dynamic rate adaptation schemes) be used or even enhanced for the future Internet? What are subjective/objective metrics for performance measurement? How can real-time guarantees be provided for live and interactive video streams?

While the topic is quite wide, we will narrow the focus of this special issue on the fundamental problems of video distribution and delivery in the future Internet. We invite submissions of high-quality original technical and survey papers, which have not been published previously, on video distribution in the future Internet, including the following non-exhaustive list of topics. Please note that all topics must be understood in the context of the future Internet as outlined above.
  • Network-assisted video distribution, network support for multimedia, specifically supporting wireless environments
  • New information-centric and software-defined architectures to support wired and wireless video streaming
  • Resource allocation for wired and wireless video distribution
  • Media streaming, distribution, and storage support in the future Internet
  • In-network caching/storage, named data retrieval, publish/subscribe for video distribution in wired and wireless networks
  • Next generation Content Delivery Networks (CDN)
  • Adaptive streaming and rate adaptation for video streaming in the future Internet for wired and wireless networks
  • Peer-to-peer aspects of video multimedia distribution, including scaling and capacity
  • QoS/QoE measurement and support for video distribution in the future Internet
  • User-generated content and social networks for multi-media
  • Video compression techniques explicitly supporting the future Internet
  • Big-Data mechanisms (say referral engines or content placement algorithms) for video content over future Internet
  • Social-aware video content distribution over future Internet
  • Integration of video distribution and multimedia computing over future Internet
  • Testbeds and measurements of video distribution over future Internet
  • Cost and economic models for video distribution over future Internet
  • Theoretical foundations for video distribution over future Internet, e.g., network coding, information theory, machine learning, etc
Special Issue Editors
  • Prof. Cedric Westphal, Huawei Innovations & UCSC, USA 
  • Prof. Tommaso Melodia, Northeastern University, Boston, MA, USA 
  • Prof. Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria
  • Prof. Wenwu Zhu, Tsinghua University, Beijing, China
Important Dates
  • Paper Submission due: 05/29/2015
  • First review complete: 09/15/2015
  • Acceptance Notification: 11/15/2015
  • Camera-ready version: 12/15/2015
  • Publication date: Second Quarter 2016 
Manuscript submissions and reviewing process: All submissions must be original work that has not been published or submitted elsewhere. For submission format, please follow IEEE JSAC guidelines (http://www.comsoc.org/jsac/paper-submission-guidelines). Each paper will go through a two-round rigorous reviewing process by at least three leading experts in related areas. Papers should be submitted through EDAS (https://edas.info/newPaper.php?c=19291).

ICME 2015: Over-the-Top Content Delivery: State of the Art and Challenges Ahead

Supported by http://www.dash-player.com/
Tutorial at ICME 2015
June 29 - July 3, 2015
Torino, Italy

Abstract: Over-the-top content delivery is becoming increasingly attractive for both live and on-demand content thanks to the popularity of platforms like YouTube, Vimeo, Netflix, Hulu, Maxdome, etc. In this tutorial, we present state of the art and challenges ahead in over-the-top content delivery. In particular, the goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS) including the recently ratified MPEG-DASH standard. The main focus of the tutorial will be on the common problems in HAS deployments such as client design, QoE optimization, multi-screen and hybrid delivery scenarios, and synchronization issues. For each problem, we will examine proposed solutions along with their pros and cons. In the last part of the tutorial, we will look into the open issues and review the work-in-progress and future research directions.

The tutorial will be held on June 29, 2015 in the afternoon.

Slides will be provided in due time; a preliminary version (from previous presentations) can be found here and here.

Biography of Presenters

Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constraint environments) both from the Alpen-Adria-Universität Klagenfurt. He is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communication, streaming, adaptation, Quality of Experience, and Sensory Experience.

He has published more than 150 papers in these areas and he has organized a number of special sessions and issues in this domain, e.g., “Special Session on MMT/DASH” (MMsys 2011, followed by a special issue in Signal Processing: Image Communication, 2012) and “Special Issue on Adaptive Media Streaming” (IEEE JSAC, published 2014). Furthermore, he was the general chair of WIAMIS 2008, QoMEX 2013, and QCMan 2014, and will be general chair of ACM Multimedia Systems 2016. He is an editorial board member of IEEE Computer, associate editor for IEEE Transactions on Multimedia, area editor for the Elsevier journal Signal Processing: Image Communication, a key member of the Interest Groups (IG) on Image and Video Coding as well as Quality of Experience, and Director of the Review Board of the IEEE Multimedia Communication Technical Committee. Finally, he writes a regular column for ACM SIGMM Records, where he serves as an editor, and he is a member of the ACM SIGMM Open Source Software Committee. Dr. Timmerer has participated in the work of ISO/MPEG for more than 10 years, notably as the head of the Austrian delegation, coordinator of several core experiments, co-chair of several ad-hoc groups, and as an editor for various standards, notably the MPEG-21 Multimedia Framework and the MPEG Extensible Middleware (MXM, which became MPEG-M). His current contributions are in the area of MPEG-V (Media Context and Control) and Dynamic Adaptive Streaming over HTTP (DASH), for which he also serves as an editor. He received various ISO/IEC certificates of appreciation.


Ali C. Begen is with the Video and Content Platforms Research and Advanced Development Group at Cisco. His interests include networked entertainment, Internet multimedia, transport protocols and content delivery. Ali is currently working on architectures and protocols for next-generation video transport and distribution over IP networks, and he is an active contributor in the IETF and MPEG in these areas. Ali holds a Ph.D. degree in electrical and computer engineering from Georgia Tech. He received the Best Student-paper Award at IEEE ICIP 2003, the Most-cited Paper Award from Elsevier Signal Processing: Image Communication in 2008, and the Best-paper Award at Packet Video Workshop 2012. Ali has been an editor for the Consumer Communications and Networking series in the IEEE Communications Magazine since 2011 and an associate editor for the IEEE Transactions on Multimedia since 2013. He is a senior member of the IEEE and a senior member of the ACM. Further information on Ali’s projects, publications and presentations can be found at http://ali.begen.net.

Tuesday, February 10, 2015

Multimedia Streaming in Information-Centric Networks (MuSIC)

Call for Papers

2015 IEEE ICME Workshop
Multimedia Streaming in Information-Centric Networks (MuSIC)
Friday, July 3, 2015, Torino, Italy


Motivation and Goals

According to the Cisco Visual Networking Index and to Sandvine Global Internet Phenomena Reports, multimedia, in particular video for real-time entertainment, is the predominant source of traffic on the current Internet and continues to grow. However, the Internet protocols and mechanisms have not at all been designed for challenging real-time communication media like video and voice streaming and conferencing, such that "the Internet only just works," as Mark Handley put it. Intense research on Quality of Service (QoS) schemes and frameworks has been conducted over the past decades, without resulting in practical and widely accepted mechanisms in the IP networking world. Currently, Content Delivery Networks (CDNs) are the primary means to deliver massive amounts of real-time content, e.g., video streams, to clients in a satisfying manner.

Countering these problems and challenges, many Future Internet initiatives and projects have been and are being undertaken around the globe. Among them, Information-Centric Networking (ICN) is a promising approach, bringing content and efficient content distribution into focus. Several basic ICN concepts are quite similar to application-layer protocols in the IP world, e.g., a publish-subscribe approach in PSIRP/PURSUIT, pull-based data transport in CCN/NDN (interest/data packets) and in Adaptive HTTP Streaming approaches (request/response behavior).

Interestingly, though, the two communities, on Multimedia Systems/Communications and on Information-Centric Networking, have barely interacted. Multimedia communications researchers still mostly think and operate in the context of IP networks, while ICN researchers mainly discuss key networking aspects, not focusing on the requirements, challenges and opportunities of real-time multimedia data delivery/streaming (even though there are notable exceptions). Yet, recent intense discussions on the IRTF mailing list on video delivery and QoS/QoE and several publications (among them, an Internet Draft) indicate increased interest of ICN experts in multimedia communication.
The most important goal of this workshop is therefore to provide a forum that brings those two communities together, to spawn vivid discussions and intense exchange and learnings at the intersection of the two areas, and to help establish common terminology, work, and projects. The committees of the workshop are composed of leading members of both communities, in an attempt to solicit broad interest and good submissions to the workshop.

The workshop will emphasize video-on-demand (VoD) and voice/video conferencing (live) applications on ICNs, but other distributed multimedia applications are welcome, such as gaming. All aspects of media streaming in ICN will be addressed, including: basic principles and insights; protocols, mechanisms and policies (strategies) in ICN nodes; routing; measures and metrics for real-time behavior, QoS and QoE; evaluation methodology; prototype implementations, testbeds, and demos; and comparisons with IP-based systems. The workshop is open to discuss media streaming in all ICN approaches; comparisons of different ICN architectures are encouraged. Demos are welcome.

Topics of Interest (including, but not limited to)

  • Video-on-demand applications, prototypes, and demos over ICN
  • Voice/video conferencing applications, prototypes, and demos over ICN
  • Novel multimedia applications, prototypes, demos over ICN
  • Error and loss control and mitigation
  • Congestion detection and control
  • Naming and routing of media streams
  • Forwarding, aggregation, replication strategies (interests and content)
  • Caching strategies
  • Caching effects (probably unexpected and/or undesired)
  • DRM and its impact on or interplay with caching
  • Content adaptation in ICN
  • Media stream adaptation, bandwidth estimation,... on clients
  • Use of scalable media content
  • Fairness issues and metrics in ICN
  • Security and privacy issues for MM streaming over ICN
  • QoS and QoE mechanisms and metrics: impact on and interplay with ICN
  • Evaluation methodologies, in particular ICN simulation and experimental testbeds
  • Deployment and scalability issues

Submissions to the Workshop

  • Paper length: Prospective authors are invited to submit full-length papers, up to 6 pages long, by March 30, 2015.
  • Paper format: For author guidelines and paper templates please see: http://www.icme2015.ieee-icme.org/authorguide.php.
  • Paper submission: All submissions are to be made via the CMT web site at: https://cmt.research.microsoft.com/ICMEW2015. Please select "Workshop on Multimedia Streaming in Information-Centric Networks (MuSIC)".
  • Review process: Each submission will be peer-reviewed by at least three members of the TPC.
  • Accepted papers: Papers accepted for the workshop must be presented by one of the authors. Papers will be published in the Proceedings of ICME Workshops and also on-line in the IEEE Xplore digital library.

Important Dates

  • Paper submission:   March 30, 2015
  • Paper acceptance:   April 30, 2015
  • Camera-ready paper: May 15, 2015
  • Workshop:           July 3, 2015

Committees

Organizers and Technical Program Committee Chairs
-------------------------------------------------
- Hermann Hellwagner, Klagenfurt University, Austria
- George C. Polyzos, AUEB, Greece

Steering Committee
------------------
- Klara Nahrstedt, UIUC, USA
- George Pavlou, University College London, UK
- Cedric Westphal, Huawei, USA
- Chang Wen Chen, SUNY at Buffalo, USA

Technical Program Committee
---------------------------
- Alexander Afanasyev, UCLA, USA
- Ali Begen, Cisco, Canada
- Laszlo Böszörmenyi, Klagenfurt University, Austria
- Jeff Burke, UCLA, USA
- Giovanna Carofiglio, Cisco Systems, France
- Wei Koong Chai, University College London, UK
- Wolfgang Effelsberg, Univ. Mannheim & TU Darmstadt, Germany
- Abdulmotaleb El Saddik, University of Ottawa, Canada
- Pascal Frossard, EPFL, Switzerland
- Carsten Griwodz, Simula Research Lab & Univ.of Oslo, Norway
- Mohamed Hefeeda, Simon Fraser University, Canada
- Dirk Kutscher, NEC Labs Europe, Germany
- Giannis Marias, AUEB, Greece
- Luca Muscariello, Orange Labs, France
- Klara Nahrstedt, UIUC, USA
- Börje Ohlman, Ericsson Research, Sweden
- Wei Tsang Ooi, National University of Singapore
- Dave Oran, Cisco, USA
- Jörg Ott, Aalto University, Finland
- Christos Papadopoulos, Colorado State University, USA
- Benjamin Rainer, Klagenfurt University, Austria
- Damien Saucez, INRIA, France
- Gwendal Simon, Telecom Bretagne, France
- Vasilios Siris, AUEB, Greece
- Ignacio Solis, PARC, USA
- Ralf Steinmetz, TU Darmstadt, Germany
- Christian Timmerer, Klagenfurt University, Austria
- Dirk Trossen, InterDigital, UK
- Laura Toni, EPFL, Switzerland
- Christian Tschudin, Universität Basel, Switzerland
- George Xylomenos, AUEB, Greece
- Yonggang Wen, Nanyang Technological University, Singapore
- Roger Zimmermann, National University of Singapore

Monday, January 12, 2015

MPEG news: a report from the 110th meeting, Strasbourg, France

This blog post is also available at SIGMM records.

The 110th MPEG meeting was held at the Strasbourg Convention and Conference Centre featuring the following highlights:

  • The future of video coding standardization
  • Workshop on media synchronization
  • Standards at FDIS: Green Metadata and CDVS
  • What's happening in MPEG-DASH?
Additional details about MPEG's 110th meeting can be also found here including the official press release and all publicly available documents.

The Future of Video Coding Standardization

MPEG110 hosted a panel discussion about the future of video coding standardization. The panel was organized jointly by MPEG and ITU-T SG 16's VCEG featuring Roger Bolton (Ericsson), Harald Alvestrand (Google), Zhong Luo (Huawei), Anne Aaron (Netflix), Stéphane Pateux (Orange), Paul Torres (Qualcomm), and JeongHoon Park (Samsung).

As expected, "maximizing compression efficiency remains a fundamental need" and as usual, MPEG will study "future application requirements, and the availability of technology developments to fulfill these requirements". Therefore, two Ad-hoc Groups (AhGs) have been established which are open to the public:
The presentations of the brainstorming session on the future of video coding standardization can be found here.

Workshop on Media Synchronization

MPEG110 also hosted a workshop on media synchronization for hybrid delivery (broadband-broadcast) featuring six presentations "to better understand the current state-of-the-art for media synchronization and identify further needs of the industry":
  • An overview of MPEG systems technologies providing advanced media synchronization, Youngkwon Lim, Samsung
  • Hybrid Broadcast - Overview of DVB TM-Companion Screens and Streams specification, Oskar van Deventer, TNO
  • Hybrid Broadcast-Broadband distribution for new video services: a use cases perspective, Raoul Monnier, Thomson Video Networks
  • HEVC and Layered HEVC for UHD deployments, Ye Kui Wang, Qualcomm
  • A fingerprinting-based audio synchronization technology, Masayuki Nishiguchi, Sony Corporation
  • Media Orchestration from Capture to Consumption, Rob Koenen, TNO
The presentation material is available here. Additionally, MPEG established an AhG on timeline alignment (that's how the project is internally called) to study use cases and solicit contributions on gap analysis and also technical contributions [email][subscription].

Standards at FDIS: Green Metadata and CDVS

My first report on MPEG Compact Descriptors for Visual Search (CDVS) dates back to July 2011, which provides details about the call for proposals. Now, finally, the FDIS has been approved during the 110th MPEG meeting. CDVS defines a compact image description that facilitates the comparison and search of pictures that include similar content, e.g., when showing the same objects in different scenes from different viewpoints. The compression of key point descriptors not only increases compactness, but also significantly speeds up the search and classification of images within large image databases when compared to a raw representation of the same underlying features. Application of CDVS for real-time object identification, e.g., in computer vision and other applications, is envisaged as well.
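For intuition only, the pipeline is conceptually similar to classic local-feature matching: extract descriptors once, then match them instead of the raw pixels. The sketch below uses OpenCV's ORB as a stand-in; this is explicitly not CDVS, whose standardized descriptors are far more compact and interoperable, and the image file names are hypothetical:

```python
# Rough analogy to the CDVS idea using OpenCV's ORB features.
# NOT CDVS itself: ORB is just a convenient stand-in to show the
# extract-then-match approach to image search.
import cv2  # pip install opencv-python

def match_score(img_path_a: str, img_path_b: str) -> int:
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    # More matches -> more likely the same object/scene from another viewpoint.
    return len(matcher.match(des_a, des_b))

# print(match_score("facade_1.jpg", "facade_2.jpg"))  # hypothetical files
```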

Another standard reached FDIS status entitled Green Metadata (first reported in August 2012). This standard specifies the format of metadata that can be used to reduce energy consumption from the encoding, decoding, and presentation of media content, while simultaneously controlling or avoiding degradation in the Quality of Experience (QoE). Moreover, the metadata specified in this standard can facilitate a trade-off between energy consumption and QoE. MPEG is also working on amendments to the ubiquitous MPEG-2 TS ISO/IEC 13818-1 and ISOBMFF ISO/IEC 14496-12 so that green metadata can be delivered by these formats.

What's happening in MPEG-DASH?

MPEG-DASH is in a kind of maintenance mode but is still receiving new proposals in the area of SAND parameters, and some core experiments are going on. Also, the DASH-IF is working towards new interoperability points and test vectors in preparation of actual deployments. Speaking about deployments, they are happening, e.g., a 40h live stream right before Christmas (by bitmovin, one of the top-100 companies that matter most in online video). Additionally, VideoNext was co-located with CoNEXT'14 targeting scientific presentations about the design, quality and deployment of adaptive video streaming. Webex recordings of the talks are available here. In terms of standardization, MPEG-DASH is progressing towards the 2nd amendment including spatial relationship description (SRD), generalized URL parameters and other extensions. In particular, SRD will enable new use cases which can only be addressed using MPEG-DASH, and the FDIS is scheduled for the next meeting, which will be in Geneva, Feb 16-20, 2015. I'll report on this within my next blog post, stay tuned...