Wednesday, June 24, 2009

Workshop on Modern Media Transport (MMT) - Program

The program is now available for the MPEG Workshop on Modern Media Transport (MMT).

14:00 ~ 15:30 Session I. Experiences (chair: Young-Kwon Lim, net&tv Inc.)

Use of MPEG-2 Transport in Broadcast and other applications – Challenges to be met by MMT
Sam Narasimhan (Motorola)

This short presentation provides an overview of the MPEG-2 transport standard, which was developed over 15 years ago, and its continued use in the majority of broadcast, DVD, and IP-based applications. In addition to providing a robust transport mechanism for the carriage of various codecs developed by MPEG and other standards bodies, MPEG-2 transport also serves as a foundation for specifications related to the physical layer of networks (FEC) and for conditional access (CA). Cable modem standards use MPEG-2 TS for the transmission of IP data, while DVD specifications use the program stream part of MPEG-2 Systems for content coding. Thanks to the explicit mechanisms for audio/video synchronization in MPEG-2 TS, the majority of IPTV applications continue to use MPEG-2 TS as the underlying media transport layer carried over IP. The presentation will cover some of these application examples. It will also list a set of requirements for MMT so that it can provide the functionalities of MPEG-2 TS that may still be required in the future, and include additional functionalities to overcome some of the issues currently seen with MPEG-2 transport that need to be addressed in a new standard.
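The durability of MPEG-2 TS owes much to its simple, fixed framing: every packet is 188 bytes, starts with the sync byte 0x47, and carries the PID and continuity counter in a 4-byte header. As an illustration (my own sketch, not part of the presentation), parsing that header in Python could look like this:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-2 TS packet."""
    if len(packet) != 188 or packet[0] != 0x47:  # 0x47 is the sync byte
        raise ValueError("not a valid MPEG-2 TS packet")
    return {
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],   # 13-bit packet ID
        "scrambling": (packet[3] >> 6) & 0x3,
        "adaptation_field": bool(packet[3] & 0x20),
        "has_payload": bool(packet[3] & 0x10),
        "continuity_counter": packet[3] & 0x0F,
    }

# Example: a null packet (PID 0x1FFF) with payload only
pkt = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
hdr = parse_ts_header(pkt)
print(hdr["pid"])  # 8191 (0x1FFF, the null-packet PID)
```

This fixed framing is what makes resynchronization after errors and cheap demultiplexing possible, and it is exactly the kind of functionality a successor format would have to preserve or replace.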

DVB experiences and related standards on using MPEG transport mechanisms
Alexander Adolf and Thomas Stockhammer (DVB)

This presentation will introduce various experiences of defining and using application standards based on MPEG transport mechanisms, including:
  • Download and random access of MP4 files
  • MPEG TS transport across heterogeneous networks
  • Cross-layer designs to improve the Quality of Service/Experience (QoS/QoE)
  • Context- and Content-Aware Networks
  • Internet TV Content Delivery
15:30 ~ 16:00 Coffee Break

16:00 ~ 17:30 Session II. Challenges (chair: Jörn Ostermann, University of Hannover)

Fully Interoperable Streaming of Media Resources in Heterogeneous Environments
Michael Eberhard, Christian Timmerer, and Hermann Hellwagner

This paper presents an interoperable multimedia delivery framework for (scalable) media resources based on various MPEG standards and IETF Requests for Comments (RFC). It can be used to transmit (scalable) media resources within heterogeneous usage environments where the properties of the usage environment (e.g., terminal/network capabilities) may change dynamically during the streaming session. The usage environment properties are signaled by interoperable description formats provided by the MPEG-21 Digital Item Adaptation (DIA) standard and encapsulated within the MPEG Extensible Middleware’s (MXM) request content protocol. Furthermore, the available media resources are queried by means of the MPEG Query Format (MPQF). Additionally, the actual adaptation and delivery of the content is done by exploiting a state-of-the-art multimedia framework such as that provided by VideoLAN Client (VLC).
http://www-itec.uni-klu.ac.at/~m1eberha/demo

Media-Aware Network Elements on Legacy Devices
Ingo Kofler, Robert Kuschnig, and Hermann Hellwagner

Recent advances in video coding technology, like the scalable extension of the MPEG-4 AVC/H.264 video coding standard (H.264/SVC), pave the way for computationally cheap adaptation of video content. In the course of our research we developed a lightweight RTSP/RTP proxy that enables in-network stream processing. Based on an off-the-shelf wireless router (Linksys WRT54GL Broadband Router) running a Linux-based firmware, we demonstrate that the video adaptation can be performed on the fly directly on a network device. By utilizing the RTP packetization of the video stream, the proxy can adapt the video in the spatial, temporal, and SNR domains. The proxy was developed from scratch in ANSI C and was deployed on the router using the popular OpenWrt distribution.
http://www-itec.uni-klu.ac.at/~inkofler/demo/
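For readers unfamiliar with how such RTP-level SVC adaptation works: H.264/SVC NAL units of types 14 and 20 carry a 3-byte extension header with dependency (spatial), quality (SNR), and temporal layer identifiers, so a proxy can drop packets above a target operating point without decoding. A simplified Python sketch of that forwarding decision (the actual proxy is written in ANSI C; this is only an illustration):

```python
def svc_nal_ids(nal: bytes):
    """Extract (dependency_id, quality_id, temporal_id) from an H.264/SVC
    NAL unit carrying the 3-byte SVC extension header (NAL types 14, 20)."""
    nal_type = nal[0] & 0x1F
    if nal_type not in (14, 20):
        return None  # plain AVC NAL unit, no SVC extension header
    dependency_id = (nal[2] >> 4) & 0x07  # spatial layer
    quality_id = nal[2] & 0x0F            # SNR layer
    temporal_id = nal[3] >> 5             # temporal layer
    return dependency_id, quality_id, temporal_id

def keep_nal(nal: bytes, max_d: int, max_q: int, max_t: int) -> bool:
    """Decide whether a proxy should forward this NAL unit, given target
    spatial (D), SNR (Q), and temporal (T) layer limits."""
    ids = svc_nal_ids(nal)
    if ids is None:
        return True  # always forward base-layer AVC NAL units
    d, q, t = ids
    return d <= max_d and q <= max_q and t <= max_t

# Example: a type-20 NAL unit carrying D=1, Q=0, T=2
nal = bytes([0x74, 0x80, 0x10, 0x40])
print(svc_nal_ids(nal))        # (1, 0, 2)
print(keep_nal(nal, 2, 2, 1))  # False: temporal layer 2 exceeds the limit
```

Because the decision needs only a few header bytes per packet, it fits comfortably on a resource-constrained device like a consumer router.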

Harmonization with the current QoS protocols for MMT
Doug Young Suh, Jin Woo Hong

This presentation describes how MMT will harmonize the MPEG tools with the QoS protocols of the IETF, 3GPP, and the IEEE 802 series. Such harmonization will make it possible to exploit the various useful tools developed by these standards development organizations.

Predictable Loss and Predictable Delay for IP media services
Prof. Dr.-Ing. Thorsten Herfet; M.Sc. Manuel Gorius

Internet Protocol based infrastructures are becoming increasingly important for the distribution of digital broadcast media. Unfortunately, the available transport protocols do not meet the requirements of such media in terms of timeliness, reliability, or transmission overhead. HTTP over TCP is currently the prevalent configuration for audiovisual streaming on the Internet, as it provides a convenient solution with end-to-end reliability and NAT traversal. However, the protocol is neither suitable for real-time transmission, due to its flow control, nor does it provide the scalability required for large broadcast scenarios. Therefore, current IPTV services as well as IP-based mobile broadcast solutions such as 3GPP streaming are based on UDP, usually extended by RTP. Even though multicast is still an open issue on the Internet, this protocol combination at least provides the essential scalability. Nevertheless, as soon as it comes to wireless transmission (802.11, WiMAX, 3GPP), the lack of reliability seriously affects the rendering quality at the receiver, since the services suffer from packet loss rates of several percent.
We chose an Adaptive Hybrid Error Correction (AHEC) approach as the basis for our media-oriented transport architecture. This highly flexible combination of NACK-based ARQ and adaptive packet-level FEC leads to near-optimal coding efficiency, as it is controlled by analytical parameter derivation based on a statistical channel prediction model. The ability to meet given delay and reliability constraints even allows parameter optimization beyond end-to-end connection granularity: wired and wireless networks usually differ significantly in terms of packet loss, while home network segments provide a much lower round-trip delay than IP-based delivery networks. Pure end-to-end error correction schemes are obviously not efficient in such heterogeneous network environments. Therefore, our AHEC scheme offers a link-level operation mode which relieves reliable links of the redundancy required for more unreliable links.
http://www.nt.uni-saarland.de/publications
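To illustrate why adaptive packet-level FEC is so effective against the "several percent" loss rates mentioned above, here is a simple residual-loss calculation for an ideal erasure code under independent packet losses (a sketch only; the authors' AHEC model additionally accounts for ARQ rounds, delay constraints, and channel statistics):

```python
from math import comb

def residual_loss(k: int, p: int, e: float) -> float:
    """Probability that a block of k source packets protected by p repair
    packets (ideal MDS FEC) cannot be fully recovered, assuming independent
    packet losses with rate e. Recovery succeeds if at most p packets of
    the n = k + p transmitted packets are lost."""
    n = k + p
    return sum(comb(n, i) * e**i * (1 - e)**(n - i)
               for i in range(p + 1, n + 1))

# Wireless-style conditions: 5% packet loss, blocks of 20 source packets
print(residual_loss(20, 0, 0.05))  # no FEC: ~64% of blocks are hit
print(residual_loss(20, 4, 0.05))  # 20% redundancy: well below 1%
```

The steep drop from a handful of repair packets is what the analytical parameter derivation exploits: it picks just enough redundancy (plus retransmissions where the round-trip delay allows) to hit a target reliability within the delay budget.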

Tuesday, June 23, 2009

Open Source Scalable Video Coding (SVC) Software

After DVB and ATSC announced that they are considering Scalable Video Coding (SVC) within their standards, I thought it would be interesting to blog about SVC software that is publicly available, especially because SVC has already found its way into video conferencing products (e.g., Vidyo, RADVISION, GIPS, SPIRIT DSP). Currently, I'm aware of the following open source SVC software:
  • SVC Reference Software (JSVM software) which focuses on functionality rather than performance. Most of the people use this for research purposes as the reference codec.
  • The P2P-Next consortium provides its SVC software (encoder/decoder) as open source under LGPL which comprises an optimized version of the JSVM for both encoding and decoding.
  • The Open SVC Decoder has been released recently - also under LGPL - and provides an alternative to the above-mentioned implementations. Interestingly, it provides an integration for the Core Pocket Media Player (TCPMP) and MPlayer. Further information can be found on their Wiki.
In case you think I missed anything, don't hesitate to comment...

Sunday, June 21, 2009

Why do we need a Content‐Centric Future Internet?

The EC published a position paper from the Future Content Network which comprises proposals for content‐centric Internet architectures. Here's the executive summary...

Executive Summary
The aim of this document is twofold: firstly, to report and analyse the main reasons that support our claim that the Future Internet will be “Content‐Centric” and, secondly, to define two alternative solutions for a Future Content‐Centric Internet Architecture following an evolutionary and a clean‐slate approach.

The starting point of our discussion is the reasonable hypothesis that the Future Internet will mainly simplify the usability, increase the efficiency, secure the privacy and enhance the media experience of the users (enhanced mobility, truly broadband & flexible communications, immersion, enhanced interaction, involvement of all senses and emotions, navigation). New ways of media creation and consumption will emerge, aiming to cover the different human needs and preserve the revenue generation of the various stakeholders. Moreover, new content types will appear, which together with efficient handling, delivery and protection of the content (i.e. static or dynamic, pre‐recorded, cached or live) will be the Future Internet cornerstones. Thus, the content/media and its efficient handling are at the heart of the Future Internet.

Taking into account the fact that the current Internet cannot efficiently serve the increasing needs and the foreseen requirements, two Content‐Centric Internet Architectures are proposed: a “Logical Content‐Centric Architecture”, which consists of different virtual hierarchies of nodes with different functionality and an “Autonomic Content‐Centric‐Internet Architecture”, which relies on the completely novel concept of the “content object”.

Yet, the major objective of this position paper is to initiate a debate between all the interested stakeholders with respect to the following three fundamental arguments:
  1. Will the Future Internet be Content‐Centric?
  2. What would a potential Content‐Centric Internet Architecture look like?
  3. Which design principles and requirements would govern such an architecture?
It is interesting to see this "content object" concept, as it seems to borrow a lot from the MPEG-21 Multimedia Framework, which aims to enable the transaction of Digital Items among Users. It would be very interesting to see some of the MPEG-21 concepts being adopted into the Content-Centric Future Internet!

Friday, June 19, 2009

First Draft Published for Ontology for Media Resource 1.0

The Media Annotations Working Group has published the First Public Working Draft of Ontology for Media Resource 1.0. This specification defines an ontology for cross-community data integration of information related to media resources, with a particular focus on media resources on the Web. The ontology is supposed to foster interoperability and counter the current proliferation of video metadata formats by providing full or partial translation and mapping towards existing formats. Learn more about the Video in the Web Activity.

Tuesday, June 9, 2009

IWQOS 2009 Call for Participation

CALL FOR PARTICIPATION

IEEE IWQoS 2009
17th IEEE International Workshop on Quality of Service
July 13-15, 2009
Charleston, South Carolina
http://www.ieee-iwqos.org/

You are cordially invited to participate in the upcoming 17th IEEE International Workshop on Quality of Service (IEEE IWQoS 2009) sponsored by the IEEE Communications Society.

IWQoS 2009 offers 9 exciting technical sessions with 28 regular papers and 10 short papers. It also features two keynote talks by Professor Peter Steenkiste (Carnegie Mellon University) and Professor Roch Guerin (University of Pennsylvania). For more details, please visit http://www.ieee-iwqos.org/program.php.

Please note that the early registration deadline is June 26 and the deadline for a reduced room rate at the Charleston Place Hotel is June 13.

We look forward to welcoming you in Charleston, SC.

Friday, June 5, 2009

Preparation of Call for Evidence in London

Just received this via the AhG on High-Performance Video Coding: so far they have received 12 expressions of interest to participate in the Call for Evidence. They will have an AhG meeting on Sat-Sun prior to the next meeting, where they will start reviewing the inputs and, in parallel, start the real evaluation viewing with small groups of experts.

To subscribe or unsubscribe for this AhG, go to http://mailman.rwth-aachen.de/mailman/listinfo/mpeg-newvid.

Thursday, June 4, 2009

Promising avenues for interdisciplinary research in vision

Dr. Oge Marques, Florida Atlantic University

June 25, 2009, 10:30 a.m.

E.2.42, Universität Klagenfurt

Research in vision science has an intrinsic potential for integrating contributions from psychology, computer science, engineering, optics, neuroscience, and physiology, among many other areas of knowledge. During the past 15 years, many vision researchers have successfully demonstrated that the results of such interdisciplinary efforts can advance the state of the art and lead to promising discoveries.
This talk presents representative research results that blend experiments in human visual perception and computer vision models to solve challenging vision problems. Particularly, it discusses the issues of object and scene recognition and the role of context and shows how they are being addressed by the leading researchers in the field.
After introducing selected basic concepts of object detection and recognition, scene recognition and analysis, and the role of context, we will discuss representative attempts to model the process of context influences in object perception. We will then motivate further research efforts by presenting a number of fascinating open problems in this field and suggesting how they can be approached in a truly interdisciplinary way.

Dr. Oge Marques is an Associate Professor in the Department of Computer Science and Engineering at Florida Atlantic University in Boca Raton, Florida. He is currently a guest professor with ITEC at University of Klagenfurt. He received his Ph.D. in Computer Engineering from Florida Atlantic University in 2001, his Masters in Electronics Engineering from Philips International Institute (Eindhoven, NL) in 1989 and his Bachelor's Degree in Electrical Engineering from UTFPR (Curitiba, Brazil), where he also taught for more than 10 years before moving to the USA.
His research interests include: image processing, analysis, annotation, search, and retrieval; human and computer vision; and video processing and analysis. He has more than 20 years of teaching and research experience in the fields of image processing and computer vision, in different countries (USA, Austria, Brazil, Netherlands, Spain, and India) and capacities. He is the (co-)author of four books on these topics, including the forthcoming textbook “Practical Image and Video Processing Using MATLAB” (Wiley, 2010). He has also published several book chapters and more than 50 refereed journal and conference papers in these fields. He serves as a reviewer and Editorial Board member for several leading journals in computer science and engineering. He is a member of ACM, IEEE, IEEE Computer Society, IEEE Education Society, and the honor societies of Phi Kappa Phi and Upsilon Pi Epsilon.

Use Cases and Requirements for Ontology and API for Media Object 1.0 Draft Published

The Media Annotations Working Group has published a Working Draft of Use Cases and Requirements for Ontology and API for Media Object 1.0. This document specifies use cases and requirements as an input for the development of the "Ontology for Media Object 1.0" and the "API for Media Object 1.0". The ontology will be a simple ontology to support cross-community data integration of information related to media objects on the Web. The API will provide read access and potentially write access to media objects, relying on the definitions from the ontology. Learn more about the Video in the Web Activity.

This is the second version of this working draft and from the
"Purpose of the Ontology and the API" section I've extracted the following:

The ontology will define mappings from properties in existing formats to a common set of properties. The API will then define methods to access heterogeneous metadata using such mappings. An example: the property createDate from XMP can be mapped to the property DateCreated from IPTC. The API will then define a method getCreateDate that returns values from either XMP or IPTC metadata.
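The mapping idea from the quoted example can be sketched in a few lines of Python (the property table and function names below are hypothetical; the actual ontology and API were still working drafts at this point):

```python
# Hypothetical mapping table: one abstract property ("dateCreated")
# backed by format-specific property names, as in the draft's example.
MAPPINGS = {
    "dateCreated": {"XMP": "createDate", "IPTC": "DateCreated"},
}

def get_create_date(metadata):
    """Return the creation date from whichever metadata format is present.
    `metadata` maps a format name to that format's property dictionary."""
    for fmt, prop in MAPPINGS["dateCreated"].items():
        if fmt in metadata and prop in metadata[fmt]:
            return metadata[fmt][prop]
    return None  # no known format carries the property

print(get_create_date({"XMP": {"createDate": "2009-06-04"}}))    # 2009-06-04
print(get_create_date({"IPTC": {"DateCreated": "2009-06-04"}}))  # 2009-06-04
```

The point of the ontology is exactly this indirection: applications code against the abstract property once, and the mapping table absorbs the proliferation of format-specific names.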

An important aspect of the above figure is that everything visualized above the API is left to applications. For example:

  • languages for simple or complex queries

  • analysis of user preferences (like "preferring movies with actor X and suitable for children")

  • other mechanisms for accessing metadata

The ontology and the API provide merely a basic, simple means of interoperability for such applications.

Wednesday, June 3, 2009

Peer-to-peer for content delivery for IPTV services: analysis of mechanisms and NGN impacts

Just found this document here, which is a first draft of ETSI TISPAN's work in the area of P2P television. It seems that P2P is becoming a major issue for standardization:
  • IETF has mainly the ALTO and P2PSIP working groups, but P2P streaming is also pushing onto the market.
  • DVB recently released an Internet TV questionnaire which has some P2P aspects.
  • W3C had some P2P activity in the past but now it seems to be silent with respect to this topic. Maybe the situation changes in the future...
  • MPEG provides the baseline technologies (codecs and file formats) but also started an exploration activity towards an Advanced IPTV Terminal (AIT), probably jointly with ITU-T.
The future will show us which of these activities will be successful, and I'd like to close with a quote that fits here very well: "The nice thing about standards is that you have so many to choose from." --Andrew S. Tanenbaum

Let me know in case something is missing or wrong.

GIST: General Internet Signalling Transport

A New Internet-Draft is available from the on-line Internet-Drafts directories. This draft is a work item of the Next Steps in Signaling Working Group of the IETF.

Title : GIST: General Internet Signalling Transport
Author(s) : H. Schulzrinne, M. Stiemerling
Filename : draft-ietf-nsis-ntlp-20.txt
Pages : 156
Date : 2009-06-03

This document specifies protocol stacks for the routing and transport of per-flow signalling messages along the path taken by that flow through the network. The design uses existing transport and security protocols under a common messaging layer, the General Internet Signalling Transport (GIST), which provides a common service for diverse signalling applications. GIST does not handle signalling application state itself, but manages its own internal state and the configuration of the underlying transport and security protocols to enable the transfer of messages in both directions along the flow path. The combination of GIST and the lower layer transport and security protocols provides a solution for the base protocol component of the "Next Steps in Signalling" framework.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-ietf-nsis-ntlp-20.txt

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/

Tuesday, June 2, 2009

Invitation to attend: First International Workshop on Quality of Multimedia Experience, San Diego, July 29-31, 2009

On behalf of the organizing committee of the First International Workshop on Quality of Multimedia Experience (QoMEX 2009), we would like to invite you to attend this exciting event, which will take place in San Diego, July 29-31, 2009. QoMEX'09 features oral presentations, exhibits, panels, and poster sessions in order to provide attendees with various channels to exchange and acquire information about the latest developments and future trends in the field of multimedia user experience.
Highlights from the Technical Program:
Plenary Talks
Speaker: Mr. Dave Blakely, Senior Director, IDEO
"Haptic Design Guidelines and Tools for the Next Generation of User Experience"
Speaker: Dr. Christophe Ramstein, Chief Technology Officer, Immersion Corporation
Speaker: Dr. Bruce Flinchbaugh, Texas Instruments Fellow and Director of the Video & Image Processing Laboratory, TI, Dallas
Speaker: Prof. Christine Fernandez-Maloigne, Professor of Signal and Image Processing in Poitiers University, France
More details about the talks:
Panel on "Quality of Experience: Tools, Targets and Trends"
Panel Chair: Prof. Fernando Pereira, Prof. ECE, Instituto Superior Técnico, Portugal
Panelists:
Prof. Alan Bovik, Prof. and Chair, ECE, UT Austin
Dr. Gary Sullivan, Video Architect, Microsoft, and Chairman, ITU-T VCEG
Prof. Sebastian Moeller, Deutsche Telekom Laboratories and Berlin University of Technology
Dr. Stefan Winkler, Principal Technologist, Symmetricom
More details about the panel
Technical papers will address the following major areas:
• User Experience Assessment and Enhancement
• Visual User Experience (Image/Video/Graphics)
• Auditory User Experience (Speech/Audio)
• Standardization Activities in Multimedia Quality Evaluation
See here for the list of papers
Registration
To register for the workshop please use this link
Further information is available at: http://www.qomex.org