Monday, August 31, 2015

Over-the-Top Content Delivery: State of the Art and Challenges Ahead at ICME 2015

As stated in my MPEG report from Warsaw, I attended ICME'15 in Torino to give a tutorial -- together with Ali Begen -- about over-the-top content delivery. The slides are available as usual and embedded here...


If you have any questions or comments, please let us know. The goal of this tutorial is to give an overview of MPEG-DASH as well as selected informative aspects (e.g., workflows, adaptation, quality, evaluation) not covered in the standard. However, it should not be seen as a tutorial on the standard itself, as many of the approaches presented here can also be applied to other formats, although MPEG-DASH seems to be the most promising of those available. During the tutorial we ran into interesting questions and discussions with the audience, and I could also show some live demos from bitmovin using bitcodin and bitdash. Attendees were impressed by the maturity of the technology behind MPEG-DASH and by how research results find their way into actual products available on the market.

If you're interested, I'll give a similar tutorial -- together with Tobias Hoßfeld -- about "Adaptive Media Streaming and Quality of Experience Evaluations using Crowdsourcing" during ITC27 (Sep 7, 2015, Ghent, Belgium), and bitmovin will be at IBC2015 in Amsterdam.


Friday, August 28, 2015

One Year of MPEG

In my last MPEG report (index) I mentioned that the 112th MPEG meeting in Warsaw was my 50th MPEG meeting, which roughly accumulates to one year of MPEG meetings. That is, one year of my life I've spent in MPEG meetings - scary, isn't it? Thus, I thought it's time to recap what I have done in MPEG so far, featuring the following topics/standards to which I made significant contributions:
  • MPEG-21 - The Multimedia Framework 
  • MPEG-M - MPEG extensible middleware (MXM), later renamed to multimedia service platform technologies 
  • MPEG-V - Information exchange with Virtual Worlds, later renamed to media context and control
  • MPEG-DASH - Dynamic Adaptive Streaming over HTTP

MPEG-21 - The Multimedia Framework

I started my work on standards, specifically within MPEG, with Part 7 of MPEG-21, referred to as Digital Item Adaptation (DIA), and developed the generic Bitstream Syntax Description (gBSD) in collaboration with SIEMENS, which allows for coding-format-independent (generic) adaptation of scalable multimedia content to the actual usage environment (e.g., different devices, resolutions, bitrates). The main goal of DIA was to enable Universal Media Access (UMA) -- any content, anytime, anywhere, on any device -- which also motivated me to start this blog. I also wrote a series of blog entries on this topic, O Universal Multimedia Access, Where Art Thou?, which gives an overview of the area and basically also covers what I did in my Ph.D. thesis. Later I helped out in various MPEG-21 parts, including its dissemination, and documented where it has been used. Over the years I saw many forms of Digital Items (e.g., iTunes LP was one of the first) but, unfortunately, the demand for a standardised format is very low. Instead, proprietary formats are used, and I realised that developers care more about APIs than formats. The format comes with the API, but it's the availability of an API that attracts developers and makes them adopt a certain technology.
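
To illustrate the gBSD idea, here is a minimal sketch of such a generic adaptation engine: an XML description marks segments of a scalable bitstream with the layer they belong to, and a format-agnostic engine drops everything above the target layer. Note that the element and attribute names below are simplified placeholders for this sketch, not the normative MPEG-21 DIA syntax.

    # Minimal sketch of gBSD-style, coding-format-independent adaptation.
    # The XML is a simplified stand-in for a real gBSD document; element
    # and attribute names are illustrative, not the normative DIA syntax.
    import xml.etree.ElementTree as ET

    GBSD = """
    <bitstream>
      <segment start="0"    length="1024" layer="0"/>  <!-- base layer -->
      <segment start="1024" length="512"  layer="1"/>  <!-- enhancement 1 -->
      <segment start="1536" length="512"  layer="2"/>  <!-- enhancement 2 -->
    </bitstream>
    """

    def adapt(bitstream: bytes, gbsd_xml: str, max_layer: int) -> bytes:
        """Keep only the segments whose layer fits the usage environment."""
        out = bytearray()
        for seg in ET.fromstring(gbsd_xml).iter("segment"):
            if int(seg.get("layer")) <= max_layer:
                start, length = int(seg.get("start")), int(seg.get("length"))
                out += bitstream[start:start + length]
        return bytes(out)

    # A device that can only handle the base layer:
    full = bytes(2048)                  # dummy scalable bitstream
    adapted = adapt(full, GBSD, max_layer=0)
    print(len(adapted))                 # -> 1024

The point is that the engine never inspects the media itself; all coding-format knowledge lives in the description, which is exactly what makes the adaptation generic.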

MPEG-M

The lessons learned from MPEG-21 were one reason why I joined the MPEG-M project, as its purpose was exactly that: to create APIs for various MPEG technologies, providing developers with a tool that makes it easy for them to adopt new technologies and, thus, new formats/standards. We created an entire architecture, APIs, and reference software to make it easy for external people to adopt MPEG technologies. The goal was to hide the complexity of the technology behind simple-to-use APIs, which should enable the accelerated development of components, solutions, and applications utilising digital media content. A good overview of MPEG-M can be found on this poster.
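
To make the API-first idea concrete, here is a tiny, purely illustrative facade in the MPEG-M/MXM spirit; all class and method names are invented for this sketch and are not the actual MXM API.

    # Purely illustrative facade: one simple call hides the orchestration
    # of several underlying technologies. Names are made up, NOT the real
    # MXM API.
    class MediaPlayerEngine:
        def open(self, uri: str) -> None:
            manifest = self._fetch(uri)        # e.g., retrieve a manifest
            streams = self._parse(manifest)    # e.g., MPEG-DASH parsing
            self._decode(streams)              # e.g., AVC/HEVC decoding

        def _fetch(self, uri: str) -> str:
            print(f"fetching {uri}")
            return "<MPD/>"

        def _parse(self, manifest: str) -> list:
            print("parsing manifest")
            return ["video", "audio"]

        def _decode(self, streams: list) -> None:
            print(f"decoding {streams}")

    # A developer only ever sees the one-line entry point:
    MediaPlayerEngine().open("http://example.com/manifest.mpd")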

MPEG-V

When MPEG started working on MPEG-V (it was not called that in the beginning), I saw it as an extension of UMA and MPEG-21 DIA to go beyond audio-visual experiences by stimulating potentially all human senses. We created and standardised an XML-based language that enables the annotation of multimedia content with sensory effects. Later the scope was extended to include virtual worlds, which resulted in the acronym MPEG-V. It also brought me to start working on Quality of Experience (QoE), and we coined the term Quality of Sensory Experience (QuASE) as part of the (virtual) SELab at Alpen-Adria-Universität Klagenfurt, which offers a rich set of open-source software tools and datasets around this topic on top of off-the-shelf hardware (still in use in my office).
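
To give a flavour of what such an annotation can look like, the following sketch generates a simplified sensory-effect track aligned with media time; the element names are placeholders and not the normative MPEG-V Sensory Effect Metadata schema.

    # Sketch of annotating media with sensory effects in the spirit of
    # MPEG-V. Element/attribute names are simplified placeholders, not
    # the normative Sensory Effect Metadata (SEM) schema.
    import xml.etree.ElementTree as ET

    sem = ET.Element("SEM")
    for start_ms, effect, intensity in [(0, "wind", 0.3), (5000, "light", 0.8)]:
        ET.SubElement(sem, "Effect", type=effect,
                      start=str(start_ms),       # media time in ms
                      intensity=str(intensity))  # normalised 0..1

    print(ET.tostring(sem, encoding="unicode"))
    # <SEM><Effect type="wind" start="0" intensity="0.3" /> ... </SEM>

A renderer would read such a track alongside the audio-visual stream and drive fans, lights, etc. in sync with playback.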

MPEG-DASH

The latest project I'm working on is MPEG-DASH, in the context of which I also co-founded bitmovin, now a successful startup offering the fastest transcoding in the cloud (bitcodin) and high-quality MPEG-DASH players (bitdash). It all started when MPEG asked me to chair the evaluation of the call for proposals on HTTP streaming of MPEG media. We then created dash.itec.aau.at, which offers a huge set of open-source tools and datasets used by both academia and industry worldwide (e.g., listed on DASH-IF). I think I can proudly state that this is the most successful MPEG activity I've been involved in so far... (note: a live deployment can be found here, showing 24/7 music videos over the Internet using bitcodin and bitdash).
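
The core idea that made DASH so attractive, purely client-driven rate adaptation over plain HTTP, fits in a few lines. The following is a deliberately naive sketch; real players such as bitdash use far more elaborate logic, and the bitrates, simulated network, and smoothing factor here are made up.

    # Naive throughput-based adaptation loop, the essence of a DASH client:
    # per segment, pick the highest representation whose bitrate fits the
    # estimated throughput. All numbers are illustrative.
    import random

    BITRATES = [500, 1000, 3000]  # kbit/s, as advertised in the MPD

    def pick(throughput_kbps: float, safety: float = 0.8) -> int:
        """Highest representation that fits within a safety margin."""
        fitting = [b for b in BITRATES if b <= throughput_kbps * safety]
        return max(fitting, default=min(BITRATES))

    throughput = float(BITRATES[0])      # conservative start-up estimate
    for segment in range(1, 6):
        chosen = pick(throughput)
        measured = random.uniform(800, 4000)            # simulated download
        throughput = 0.8 * throughput + 0.2 * measured  # smoothed estimate
        print(f"segment {segment}: {chosen} kbit/s (est. {throughput:.0f})")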

DASH and QuASE are also part of my habilitation, which brought me to my current position as Associate Professor at Alpen-Adria-Universität Klagenfurt. Finally, one might ask whether it was all worth spending so much time on MPEG and at MPEG meetings. I would say YES, and there are many reasons, which could easily result in another blog post (or more), but it's better to discuss this face to face. I'm sure there will be plenty of opportunities in the (near) future, or you can come to Klagenfurt, e.g., for ACM MMSys 2016...




Tuesday, July 28, 2015

MPEG news: a report from the 112th meeting, Warsaw, Poland



This blog post is also available at the bitmovin tech blog and SIGMM records.

The 112th MPEG meeting in Warsaw, Poland was a special meeting for me. It was my 50th MPEG meeting, which roughly accumulates to one year of MPEG meetings (i.e., one year of my life I've spent in MPEG meetings incl. traveling - scary, isn't it? ... more on this in another blog post). But what happened at this 112th MPEG meeting (my 50th meeting)...

  • Requirements: CDVA, Future of Video Coding Standardization (no acronym yet), Genome compression
  • Systems: M2TS (ISO/IEC 13818-1:2015), DASH 3rd edition, Media Orchestration (no acronym yet), TRUFFLE
  • Video/JCT-VC/JCT-3D: MPEG-4 AVC, Future Video Coding, HDR, SCC
  • Audio: 3D audio
  • 3DG: PCC, MIoT, Wearable
MPEG Friday Plenary. Photo (c) Christian Timmerer.
As usual, the official press release and other publicly available documents can be found here. Let's dig into the different subgroups:
Requirements

In the requirements subgroup, experts were working on the Call for Proposals (CfP) for Compact Descriptors for Video Analysis (CDVA), including an evaluation framework. The evaluation framework includes 800-1000 objects (large objects like building facades, landmarks, etc.; smaller objects like paintings, books, statues, etc.; scenes like interior scenes, natural scenes, and multi-camera shots), and the evaluation of the responses should be conducted at the 114th meeting in San Diego.

The future of video coding standardization is currently being shaped within MPEG, paving the way for the successor of the HEVC standard. The current goal is to provide (native) support for scalability (more than two spatial resolutions) and a 30% compression gain for some applications (requiring a limited increase in decoder complexity); actually preferred, however, is a 50% compression gain (at a significant increase in encoder complexity). MPEG will hold a workshop at the next meeting in Geneva discussing specific compression techniques, objective (HDR) video quality metrics, and compression technologies for specific applications (e.g., multiple-stream representations, energy-saving encoders/decoders, games, drones). The current goal is to have the International Standard for this new video coding standard around 2020.

MPEG has recently started a new project referred to as Genome Compression, which is, of course, about the compression of genome information. A big dataset has been collected and experts are working on the Call for Evidence (CfE). The plan is to hold a workshop at the next MPEG meeting in Geneva on the prospects of genome compression and storage standardization, targeting users, manufacturers, service providers, technologists, etc.

Systems


Summer in Warsaw. Photo (c) Christian Timmerer.
The 5th edition of the MPEG-2 Systems standard has been published as ISO/IEC 13818-1:2015 on the 1st of July 2015 and is a consolidation of the 4th edition + Amendments 1-5.

In terms of MPEG-DASH, the draft text of ISO/IEC 23009-1 3rd edition, comprising the 2nd edition + COR 1 + AMD 1 + AMD 2 + AMD 3 + COR 2, is available for committee-internal review. Publication is expected, most likely, in 2016. Currently, MPEG-DASH sees a lot of activity in the following areas: spatial relationship description, generalized URL parameters, authentication, access control, multiple MPDs, full duplex protocols (aka HTTP/2 etc.), advanced and generalized HTTP feedback information, and various core experiments (a minimal MPD example follows the list below):
  • SAND (Server and Network Assisted DASH)
  • FDH (Full Duplex DASH)
  • SAP-Independent Segment Signaling (SISSI)
  • URI Signing for DASH
  • Content Aggregation and Playback Control (CAPCO)
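For readers who have never looked inside an MPD, the following deliberately minimal manifest shows the basic structure (MPD, Period, AdaptationSet, Representation) that the 3rd edition keeps extending; it omits most mandatory details of ISO/IEC 23009-1 and is meant only as a sketch.

    # A deliberately minimal MPD illustrating the DASH manifest structure:
    # MPD -> Period -> AdaptationSet -> Representation. Real manifests
    # carry many more mandatory attributes (profiles, durations, segment
    # information); this is a toy example.
    import xml.etree.ElementTree as ET

    MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
      <Period>
        <AdaptationSet mimeType="video/mp4">
          <Representation id="360p" bandwidth="500000"/>
          <Representation id="1080p" bandwidth="3000000"/>
        </AdaptationSet>
      </Period>
    </MPD>"""

    ns = {"d": "urn:mpeg:dash:schema:mpd:2011"}
    for rep in ET.fromstring(MPD).findall(".//d:Representation", ns):
        print(rep.get("id"), rep.get("bandwidth"), "bit/s")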
In particular, the core experiment process is very open, as most work is conducted during the Ad hoc Group (AhG) period and discussed on the publicly available MPEG-DASH reflector.

MPEG Systems recently started an activity related to media orchestration, which applies to capture as well as consumption and concerns scenarios with multiple sensors as well as multiple rendering devices, including one-to-many and many-to-one scenarios, with the aim of a worthwhile, customized experience.

Finally, the systems subgroup started an exploration activity regarding real-time streaming of files (a.k.a. TRUFFLE), which should perform a gap analysis leading to extensions of the MPEG Media Transport (MMT) standard. However, some experts within MPEG concluded that most/all use cases identified within this activity could actually be solved with existing technology such as DASH. Thus, this activity may still need some discussion...

Video/JCT-VC/JCT-3D

The MPEG video subgroup is working towards a new amendment for the MPEG-4 AVC standard covering resolutions up to 8K and higher frame rates for lower resolutions. Interestingly, although MPEG is most of the time ahead of industry, 8K and high frame rates are already supported in browser environments (e.g., using bitdash: 8K, HFR) and on modern encoding platforms like bitcodin. Still, it's good that we finally have means for interoperable signaling of this profile.

In terms of future video coding standardization, the video subgroup released a call for test material. Two sets of test sequences are already available and will be investigated with regard to compression until the next meeting.

After a successful call for evidence on High Dynamic Range (HDR), the technical work starts in the video subgroup with the goal of developing an architecture ("H2M") as well as three core experiments (optimization without HEVC specification changes, alternative reconstruction approaches, and objective metrics).

The main topic of the JCT-VC was screen content coding (SCC), which came up with new coding tools that better compress content that is (fully or partially) computer-generated, leading to a significant improvement in compression, with rate reductions of approximately 50% or more for specific screen content.

Audio

The audio subgroup is mainly concentrating on 3D audio, where it identified the need for intermediate bitrates between 3D audio phase 1 and phase 2. Currently, phase 1 identified 256, 512, and 1200 kb/s whereas phase 2 focuses on 128, 96, 64, and 48 kb/s. The broadcasting industry needs intermediate bitrates and, thus, phase 2 is extended to bitrates between 128 and 256 kb/s.

3DG

MPEG 3DG is working on point cloud compression (PCC), for which open-source software has been identified. Additionally, there are new activities in the areas of Media Internet of Things (MIoT) and wearable computing (like glasses and watches) that could lead to new standards developed within MPEG. Therefore, stay tuned on these topics as they may shape your future.

The week after the MPEG meeting I met the MPEG convenor and the JPEG convenor again during ICME 2015 in Torino, but that's another story...
L. Chiariglione, H. Hellwagner, T. Ebrahimi, C. Timmerer (from left to right) during ICME2015. Photo (c) T. Ebrahimi.



Thursday, March 19, 2015

MMSys 2016 - Preliminary Call for Papers


ACM Multimedia Systems 2016 (MMSys'16) [PDF]
co-located with NOSSDAV, MoVid, and MMVE

May 10-13, 2016
Klagenfurt am Wörthersee, Austria

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research on specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, and database communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to view the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types. MMSys is a venue for researchers who explore:
  • Complete multimedia systems that provide a new kind of multimedia experience, or systems whose overall performance improves the state of the art through new research results in one or more components, or
  • Enhancements to one or more system components that provide a documented improvement over the state-of-the-art for handling continuous media or time-dependent services.
Such individual system components include:
  • Operating systems
  • Distributed architectures and protocol enhancements
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New or improved I/O architectures or I/O devices, innovative uses and algorithms for their operation
  • Representation of continuous or time-dependent media
  • Metrics, measures and measurement tools to assess performance and quality of service/experience
This touches on aspects of many hot topics: adaptive streaming, games, virtual environments, augmented reality, 3D video, immersive systems, telepresence, multi- and many-core systems, GPGPUs, mobile streaming, P2P, clouds, and cyber-physical systems. All submissions will be peer-reviewed by at least 3 members of the technical program committee. Full papers will be evaluated for their scientific quality. Accepted papers must reach a high scientific standard and document unpublished research.

Committee ACM MMSys
  • General chair: Christian Timmerer, AAU
  • TPC chair: Ali C. Begen, CISCO
  • Dataset chair: Karel Fliegel, CTU
  • Demo chairs: Omar Niamut, TNO & Michael Zink, UMass
  • Proceedings chair: Benjamin Rainer, AAU
  • Publicity chairs
    • America: Baochun Li, University of Toronto
    • Asia: Sheng-Wei Chen (a.k.a. Kuan-Ta Chen), Academia Sinica
    • Middle East: Mohamed Hefeeda, Qatar Computing Research Institute (QCRI)
    • Europe: Vincent Charvillat, IRIT-ENSEEIHT-Toulouse Univ.
  • Local chair: Laszlo Böszörmenyi, AAU
Important dates ACM MMSys
  • Submission deadline: November 27, 2015
  • Reviews available to authors: January 15, 2016
  • Rebuttal deadline:  January 22, 2016
  • Acceptance notification: January 29, 2016
  • Camera ready deadline: March 11, 2016
Committee ACM NOSSDAV (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Hermann Hellwagner, AAU
  • TPC chair: Eckehard Steinbach, TUM
Important dates ACM NOSSDAV
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MMVE (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Jean Botev, Univ. of Luxembourg
Important dates ACM MMVE
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MoVid (co-located with MMSys)
  • TPC chair: Pål Halvorsen, Simula/Univ. Oslo
  • TPC co-chair: Qi Han, Colorado School of Mines
Important dates ACM MoVid
  • Submission deadline: tbd
  • Acceptance notification: tbd
  • Camera ready deadline: tbd
Local organisation
  • Chair: Laszlo Böszörmenyi
  • Alpen-Adria-Universität Klagenfurt (AAU)
  • Institute of Information Technology (ITEC)
  • Universitätsstraße 65-67, A-9020 Klagenfurt
  • Email: mmsys2016@itec.aau.at

Wednesday, March 18, 2015

MPEG news: a report from the 111th meeting, Geneva, Switzerland

MPEG111 opening plenary.
This blog post is also available at SIGMM records.

The 111th MPEG meeting (note: the link includes the press release and all publicly available output documents) was held in Geneva, Switzerland, and brought up some interesting aspects which I'd like to highlight here. Undoubtedly, it was the shortest meeting I've ever attended (and my first meeting was #61), as the final plenary concluded at 2015/02/20T18:18!

In terms of the requirements subgroup, it's worth mentioning the call for evidence (CfE) for high dynamic range (HDR) and wide color gamut (WCG) video coding, which comprises a first milestone towards a new video coding format. The purpose of this CfE is to explore whether or not (a) the coding efficiency and/or (b) the functionality of the HEVC Main 10 and Scalable Main 10 profiles can be significantly improved for HDR and WCG content. In addition, the requirements subgroup issued a draft call for evidence on free-viewpoint TV. Both documents are publicly available here.

The video subgroup continued discussions related to the future of video coding standardization and issued a public document requesting contributions on "future video compression technology". Interesting application requirements come from over-the-top streaming use cases, which call for HDR and WCG as well as video over cellular networks. Well, at least the former is covered by the CfE mentioned above. Furthermore, features like scalability and perceptual quality should be considered from the ground up and not (only) as extensions. Yes, scalability is something that really helps a lot in OTT streaming, starting with easier content management and cache-efficient delivery, and it allows for more aggressive buffer modelling and, thus, adaptation logic within the client, enabling better Quality of Experience (QoE) for the end user. It seems like complexity (at the encoder) is not so much a concern as long as it scales with cloud deployments such as http://www.bitcodin.com/ (e.g., the bitdash demo area shows some neat 4K/8K/HFR DASH demos which have been encoded with bitcodin). Closely related to 8K, there's a new AVC amendment coming up covering 8K; one can do it already today (see above), but it's good to have standards support for this. For HEVC, the JCT-3D/VC issued the FDAM4 for 3D Video Extensions and started with PDAM5 for Screen Content Coding Extensions (both documents becoming publicly available after an editing period of about a month).
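
To make the scalability argument concrete, here is a toy sketch of a buffer-based client decision: with layered (scalable) content, the client can always fall back to the base layer and only fetch enhancement layers while the playout buffer is healthy. The layer names and thresholds are illustrative only, not from any standard or product.

    # Toy buffer-based adaptation with scalable layers: the base layer is
    # always safe to fetch; enhancement layers are added only when the
    # playout buffer is healthy. Thresholds are illustrative only.
    BASE, ENH1, ENH2 = "base", "base+enh1", "base+enh1+enh2"

    def next_request(buffer_s: float) -> str:
        if buffer_s < 5:      # buffer low: never risk a stall
            return BASE
        if buffer_s < 15:     # comfortable: add one enhancement layer
            return ENH1
        return ENH2           # full quality with a large safety buffer

    for level in (2.0, 8.0, 30.0):
        print(f"buffer {level:4.0f}s -> fetch {next_request(level)}")

Because a downloaded base-layer segment is never wasted (enhancements can still be added later), the client can run with a smaller safety margin than with independent single-layer representations.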

And what about audio? The audio subgroup has decided that ISO/IEC DIS 23008-3 3D Audio shall be promoted directly to IS, which means that the DIS was already in such a good state that only editorial comments need to be applied, which actually saves a balloting cycle. We have to congratulate the audio subgroup on this remarkable milestone.

Finally, I'd like to discuss a few topics related to DASH, which is progressing towards its 3rd edition. The 3rd edition will incorporate amendment 2 (Spatial Relationship Description, Generalized URL parameters, and other extensions), amendment 3 (Authentication, Access Control, and multiple MPDs), and everything else that will be incorporated within this year, like some aspects documented in the technologies under consideration or currently being discussed within the core experiments (CEs).
Currently, MPEG-DASH conducts 5 core experiments:
  • Server and Network Assisted DASH (SAND)
  • DASH over Full Duplex HTTP-based Protocols (FDH)
  • URI Signing for DASH (CE-USD)
  • SAP-Independent Segment Signaling (SISSI)
  • Content aggregation and playback control (CAPCO)
The description of the core experiments is publicly available and, compared to the previous meeting, we have a new CE on content aggregation and playback control (CAPCO), which "explores solutions for aggregation of DASH content from multiple live and on-demand origin servers, addressing applications such as creating customized on-demand and live programs/channels from multiple origin servers per client, targeted preroll ad insertion in live programs and also limiting playback by client such as no-skip or no fast forward." This process is quite open and anybody can join by subscribing to the email reflector.

The CE for DASH over Full Duplex HTTP-based Protocols (FDH) is becoming a major activity; it basically defines the usage of DASH with the push features of WebSockets and HTTP/2. At this meeting MPEG issued a working draft, and the CE on Server and Network Assisted DASH (SAND) got its own part 5, which goes to CD, but the documents are not publicly available. However, I'm pretty sure I can report more on this next time, so stay tuned or feel free to comment here.

Friday, February 27, 2015

IEEE JSAC Special Issue: Video Distribution over Future Internet


Special issue on Video Distribution over Future Internet 

Extended Submission Deadline: May 29, 2015


The current Internet is under tremendous pressure due to the exponential growth in bandwidth demand, fueled by the shift of video consumption to online distribution, IPTV, and streaming services such as Netflix, and of phone networks to videoconferencing and Skype-like video communications. The Internet has also democratized the creation, distribution, and sharing of user-generated video content through services such as YouTube, Vimeo, or Hulu. The situation is further aggravated by the emerging trend towards higher-definition video streams, which request more and more bandwidth. Indeed, the Cisco Visual Networking Index (VNI) projects that video consumption will amount to 90% of global consumer traffic by 2017. Another shift predicted by Cisco VNI is that most data communications will be wireless by 2018.

To cope with the bandwidth growth and the shift to wireless, and to solve other related issues (e.g., naming, security, etc.) with the current Internet, new architectures for the future Internet have been proposed and prototyped. Examples include Content-Centric Networking (CCN), Named Data Networking (NDN), and content-based extensions to Software-Defined Networking (SDN), among others. None of these emerging architectures deals specifically with video distribution, as they need to support a wider range of services, but all would have to support video in an efficient manner. Therefore, the study of video distribution over the future Internet is of primary importance: how well does a future Internet architecture facilitate video delivery? What kind of video distribution mechanisms need to be created to run on the future Internet? How will video be supported in the wireless portion of the future Internet? Can the current video distribution mechanisms (such as end-to-end dynamic rate adaptation schemes) be used or even enhanced for the future Internet? What are suitable subjective/objective metrics for performance measurement? How can real-time guarantees be provided for live and interactive video streams?

While the topic is quite wide, we will narrow the focus of this special issue to the fundamental problems of video distribution and delivery in the future Internet. We invite submissions of high-quality original technical and survey papers, which have not been published previously, on video distribution in the future Internet, including the following non-exhaustive list of topics. Please note that all topics must be understood in the context of the future Internet as outlined above.
  • Network-assisted video distribution, network support for multimedia, specifically supporting wireless environments
  • New information-centric and software-defined architectures to support wired and wireless video streaming
  • Resource allocation for wired and wireless video distribution
  • Media streaming, distribution, and storage support in the future Internet
  • In-network caching/storage, named data retrieval, publish/subscribe for video distribution in wired and wireless networks
  • Next generation Content Delivery Networks (CDN)
  • Adaptive streaming and rate adaptation for video streaming in the future Internet for wired and wireless networks
  • Peer-to-peer aspects of video multimedia distribution, including scaling and capacity
  • QoS/QoE measurement and support for video distribution in the future Internet
  • User-generated content and social networks for multimedia
  • Video compression techniques explicitly supporting the future Internet
  • Big-Data mechanisms (say referral engines or content placement algorithms) for video content over future Internet
  • Social-aware video content distribution over future Internet
  • Integration of video distribution and multimedia computing over future Internet
  • Testbeds and measurements of video distribution over future Internet
  • Cost and economic models for video distribution over future Internet
  • Theoretical foundations for video distribution over future Internet, e.g., network coding, information theory, machine learning, etc.
Special Issue Editors
  • Prof. Cedric Westphal, Huawei Innovations & UCSC, USA 
  • Prof. Tommaso Melodia, Northeastern University, Boston, MA, USA 
  • Prof. Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria
  • Prof. Wenwu Zhu, Tsinghua University, Beijing, China
Important Dates
  • Paper Submission due: 05/29/2015
  • First review complete: 09/15/2015
  • Acceptance Notification: 11/15/2015
  • Camera-ready version: 12/15/2015
  • Publication date: Second Quarter 2016 
Manuscript submissions and reviewing process: All submissions must be original work that has not been published or submitted elsewhere. For submission format, please follow IEEE JSAC guidelines (http://www.comsoc.org/jsac/paper-submission-guidelines). Each paper will go through a two-round rigorous reviewing process by at least three leading experts in related areas. Papers should be submitted through EDAS (https://edas.info/newPaper.php?c=19291).

ICME 2015: Over-the-Top Content Delivery: State of the Art and Challenges Ahead

Supported by http://www.dash-player.com/
Tutorial at ICME 2015
June 29 - July 3, 2015
Torino, Italy

Abstract: Over-the-top content delivery is becoming increasingly attractive for both live and on-demand content thanks to the popularity of platforms like YouTube, Vimeo, Netflix, Hulu, Maxdome, etc. In this tutorial, we present the state of the art and the challenges ahead in over-the-top content delivery. In particular, the goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS), including the recently ratified MPEG-DASH standard. The main focus of the tutorial will be on common problems in HAS deployments such as client design, QoE optimization, multi-screen and hybrid delivery scenarios, and synchronization issues. For each problem, we will examine proposed solutions along with their pros and cons. In the last part of the tutorial, we will look into the open issues and review work in progress and future research directions.

The tutorial will be held on June 29, 2015 in the afternoon.

Slides will be provided on time and a preliminary version (from previous presentations) can be found here and here.

Biography of Presenters

Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constraint environments) both from the Alpen-Adria-Universität Klagenfurt. He is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communication, streaming, adaptation, Quality of Experience, and Sensory Experience.

He has published more than 150 papers in these areas and has organized a number of special sessions and issues in this domain, e.g., the "Special Session on MMT/DASH" (MMSys 2011, followed by a special issue in Signal Processing: Image Communication, 2012) and the "Special Issue on Adaptive Media Streaming" (IEEE JSAC, published 2014). Furthermore, he was the general chair of WIAMIS 2008, QoMEX 2013, and QCMan 2014, and will be the general chair of ACM Multimedia Systems 2016. He is an editorial board member of IEEE Computer, associate editor for IEEE Transactions on Multimedia, area editor for the Elsevier journal Signal Processing: Image Communication, a key member of the Interest Groups (IG) on Image and Video Coding as well as Quality of Experience, and Director of the Review Board of the IEEE Multimedia Communication Technical Committee. Finally, he writes a regular column for ACM SIGMM Records, where he serves as an editor, and he is a member of the ACM SIGMM Open Source Software Committee. Dr. Timmerer has participated in the work of ISO/MPEG for more than 10 years, notably as the head of the Austrian delegation, coordinator of several core experiments, co-chair of several ad-hoc groups, and editor of various standards, notably the MPEG-21 Multimedia Framework and the MPEG Extensible Middleware (MXM, which became MPEG-M). His current contributions are in the areas of MPEG-V (Media Context and Control) and Dynamic Adaptive Streaming over HTTP (DASH), for which he also serves as an editor. He received various ISO/IEC certificates of appreciation.


Ali C. Begen is with the Video and Content Platforms Research and Advanced Development Group at Cisco. His interests include networked entertainment, Internet multimedia, transport protocols and content delivery. Ali is currently working on architectures and protocols for next-generation video transport and distribution over IP networks, and he is an active contributor in the IETF and MPEG in these areas. Ali holds a Ph.D. degree in electrical and computer engineering from Georgia Tech. He received the Best Student-paper Award at IEEE ICIP 2003, the Most-cited Paper Award from Elsevier Signal Processing: Image Communication in 2008, and the Best-paper Award at Packet Video Workshop 2012. Ali has been an editor for the Consumer Communications and Networking series in the IEEE Communications Magazine since 2011 and an associate editor for the IEEE Transactions on Multimedia since 2013. He is a senior member of the IEEE and a senior member of the ACM. Further information on Ali’s projects, publications and presentations can be found at http://ali.begen.net.