Friday, September 4, 2015


Ultra-high definition (UHD) displays have been available for quite some time and, in terms of video coding, the MPEG-HEVC/H.265 standard was designed to support these high resolutions in an efficient way. And it does, delivering roughly twice the compression efficiency of its predecessor MPEG-AVC/H.264. But it all comes at a cost, not only in terms of coding complexity at both encoder and decoder, but especially when it comes to licensing. The MPEG-AVC/H.264 licenses are managed by MPEG LA, but for HEVC/H.265 there are two patent pools available, which makes its industry adoption more difficult than it was for AVC.

HEVC was published by ISO in early 2015 and in the meantime MPEG started discussing future video coding, using its usual approach of open workshops inviting experts from companies inside and outside of MPEG. However, now there’s the Alliance for Open Media (AOMedia) promising to provide "open, royalty-free and interoperable solutions for the next generation of video delivery” (press release). A good overview and summary is available here, which even mentions that a third HEVC patent pool is shaping up (OMG!).

Anyway, even if AOMedia’s "media codecs, media formats, and related technologies” are free as in “free beer”, it’s still not clear whether it will taste any good. Also, many big players are not part of this alliance and could (easily) come up with patent claims at a later stage, jeopardising the whole process (cf. what happened with VP9). In any case, AOMedia is certainly disruptive and, together with other disruptive media technologies (e.g., PERSEUS, although I have some doubts here), might change the media coding landscape; it's not clear whether it will be a turn for the better though...

Finally, I was wondering how this all impacts DASH, specifically as MPEG LA recently announced that they want to establish a patent pool for DASH, although major players stated some time ago that they would not charge anything for DASH (wrt licensing). In terms of media codecs, please note that DASH is codec agnostic: it can work with any codec, including those not specified within MPEG, and we showed this some time ago already (using WebM). The main problem, however, is which codecs are supported on which end-user devices and how to access them with which API (like HTML5 & MSE). For example, some Android devices support HEVC but not through HTML5 & MSE, which makes it more difficult to integrate with DASH.

Using MPEG-DASH with HTML5 & MSE is currently the preferred way to deploy DASH; even the DASH-IF’s reference player (dash.js) assumes HTML5 & MSE, and companies like bitmovin are offering bitdash following the same principles. Integrating new codecs on the DASH encoding side, like on bitmovin’s bitcodin cloud-based transcoding-as-a-service, isn’t a big deal and can be done very quickly as soon as software implementations are available. Thus, the problem lies more with the plethora of heterogeneous end-user devices like smartphones, tablets, laptops, computers, set-top boxes, TV sets, media gateways, gaming consoles, etc., and their variety of platforms and operating systems.
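The codec agnosticism mentioned above is visible directly in the MPD: representations using different codecs can sit side by side in a single manifest, and the client picks what its platform can decode. A minimal sketch in Python, using a hypothetical and heavily abbreviated MPD (not the output of any real encoder):

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily abbreviated MPD for illustration only.
MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="1" codecs="avc1.64001f" bandwidth="1500000" width="1280" height="720"/>
      <Representation id="2" codecs="hev1.1.6.L93.B0" bandwidth="800000" width="1280" height="720"/>
    </AdaptationSet>
    <AdaptationSet mimeType="video/webm">
      <Representation id="3" codecs="vp9" bandwidth="900000" width="1280" height="720"/>
    </AdaptationSet>
  </Period>
</MPD>"""

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def list_codecs(mpd_xml):
    """Return (codecs, bandwidth) pairs for all representations in the MPD."""
    root = ET.fromstring(mpd_xml)
    return [(r.get("codecs"), int(r.get("bandwidth")))
            for r in root.iterfind(".//mpd:Representation", NS)]

# AVC, HEVC, and VP9 representations coexist in one manifest.
print(list_codecs(MPD))
```

In a browser deployment, the equivalent device-capability check happens via MSE (e.g., querying whether a given `codecs` string is supported) before a representation is selected.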

Therefore, I’m wondering whether AOMedia (or whatever will come in the future) is a real effort changing the media landscape for the better or just another competing standard to choose from … but on the other hand, as Andrew S. Tanenbaum wrote in his book on computer networks, “the nice thing about standards is that you have so many to choose from.”

Monday, August 31, 2015

Over-the-Top Content Delivery: State of the Art and Challenges Ahead at ICME 2015

As stated in my MPEG report from Warsaw, I attended ICME'15 in Torino to give a tutorial -- together with Ali Begen -- about over-the-top content delivery. The slides are available as usual and embedded here...

If you have any questions or comments, please let us know. The goal of this tutorial is to give an overview of MPEG-DASH as well as selected informative aspects (e.g., workflows, adaptation, quality, evaluation) not covered in the standard. However, it should not be seen as a tutorial on the standard itself, as many approaches presented here can also be applied to other formats, although MPEG-DASH seems to be the most promising of those available. During the tutorial we ran into interesting questions and discussions with the audience, and I could also show some live demos from bitmovin using bitcodin and bitdash. Attendees were impressed by the maturity of the technology behind MPEG-DASH and how research results find their way into actual products available on the market.

If you're interested now, I'll give a similar tutorial -- with Tobias Hoßfeld -- about "Adaptive Media Streaming and Quality of Experience Evaluations using Crowdsourcing" during ITC27 (Sep 7, 2015, Ghent, Belgium) and bitmovin will be at IBC2015 in Amsterdam.

Friday, August 28, 2015

One Year of MPEG

In my last MPEG report (index) I mentioned that the 112th MPEG meeting in Warsaw was my 50th MPEG meeting, which roughly adds up to one year of MPEG meetings. That is, one year of my life I've spent in MPEG meetings - scary, isn't it? Thus, I thought it’s time to recap what I have done in MPEG so far, featuring the following topics/standards to which I made significant contributions:
  • MPEG-21 - The Multimedia Framework 
  • MPEG-M - MPEG extensible middleware (MXM), later renamed to multimedia service platform technologies 
  • MPEG-V - Information exchange with Virtual Worlds, later renamed to media context and control
  • MPEG-DASH - Dynamic Adaptive Streaming over HTTP

MPEG-21 - The Multimedia Framework

I started my work with standards, specifically MPEG, with Part 7 of MPEG-21 referred to as Digital Item Adaptation (DIA) and developed the generic Bitstream Syntax Description (gBSD) in collaboration with SIEMENS, which allows for a coding-format-independent (generic) adaptation of scalable multimedia content towards the actual usage environment (e.g., different devices, resolutions, bitrates). The main goal of DIA was to enable Universal Media Access (UMA) -- any content, anytime, anywhere, on any device -- which also motivated me to start this blog. I also wrote a series of blog entries on this topic, O Universal Multimedia Access, Where Art Thou?, which gives an overview of the area and is basically also what I did in my Ph.D. thesis. Later I helped a lot in various MPEG-21 parts, including its dissemination, and documented where it has been used. In the past, I saw many forms of Digital Items (e.g., iTunes LP was one of the first) but unfortunately the demand for a standardised format is very low. Instead, proprietary formats are used, and I realised that developers are more into APIs than formats. The format comes with the API, but it’s the availability of an API that attracts developers and makes them adopt a certain technology.


The lessons learned from MPEG-21 were one reason why I joined the MPEG-M project, as its purpose was exactly to create APIs into various MPEG technologies, providing developers with a tool that makes it easy for them to adopt new technologies and, thus, new formats/standards. We created an entire architecture, APIs, and reference software to make it easy for external people to adopt MPEG technologies. The goal was to hide the complexity of the technology behind simple-to-use APIs, which should enable the accelerated development of components, solutions, and applications utilising digital media content. A good overview of MPEG-M can be found on this poster.


When MPEG started working on MPEG-V (it was not called that in the beginning), I saw it as an extension of UMA and MPEG-21 DIA to go beyond audio-visual experiences by stimulating potentially all human senses. We created and standardised an XML-based language that enables the annotation of multimedia content with sensory effects. Later the scope was extended to include virtual worlds, which resulted in the acronym MPEG-V. It also led me to start working on Quality of Experience (QoE), and we coined the term Quality of Sensory Experience (QuASE) as part of the (virtual) SELab at Alpen-Adria-Universität Klagenfurt, which offers a rich set of open-source software tools and datasets around this topic on top of off-the-shelf hardware (still in use in my office).


The latest project I’m working on is MPEG-DASH, where I also co-founded bitmovin, now a successful startup offering the fastest transcoding in the cloud (bitcodin) and high-quality MPEG-DASH players (bitdash). It all started when MPEG asked me to chair the evaluation of the call for proposals on HTTP streaming of MPEG media. We then created a website that offers a huge set of open-source tools and datasets used by both academia and industry worldwide (e.g., listed on DASH-IF). I think I can proudly state that this is the most successful MPEG activity I've been involved in so far... (note: a live deployment can be found here which shows 24/7 music videos over the Internet using bitcodin and bitdash).

DASH and QuASE are also part of my habilitation, which brought me into my current position at Alpen-Adria-Universität Klagenfurt as Associate Professor. Finally, one might ask whether it was all worth spending so much time for MPEG and at MPEG meetings. I would say YES, and there are many reasons which could easily result in another blog post (or more), but it’s better to discuss this face to face; I'm sure there will be plenty of opportunities in the (near) future, or you can come to Klagenfurt, e.g., for ACM MMSys 2016 ...

Tuesday, July 28, 2015

MPEG news: a report from the 112th meeting, Warsaw, Poland

This blog post is also available at the bitmovin tech blog and SIGMM records.

The 112th MPEG meeting in Warsaw, Poland was a special meeting for me. It was my 50th MPEG meeting, which roughly adds up to one year of MPEG meetings (i.e., one year of my life I've spent in MPEG meetings incl. traveling - scary, isn't it? ... more on this in another blog post). But what happened at this 112th MPEG meeting (my 50th meeting)...

  • Requirements: CDVA, Future of Video Coding Standardization (no acronym yet), Genome compression
  • Systems: M2TS (ISO/IEC 13818-1:2015), DASH 3rd edition, Media Orchestration (no acronym yet), TRUFFLE
  • Video/JCT-VC/JCT-3D: MPEG-4 AVC, Future Video Coding, HDR, SCC
  • Audio: 3D audio
  • 3DG: PCC, MIoT, Wearable
MPEG Friday Plenary. Photo (c) Christian Timmerer.
As usual, the official press release and other publicly available documents can be found here. Let's dig into the different subgroups:

In requirements experts were working on the Call for Proposals (CfP) for Compact Descriptors for Video Analysis (CDVA) including an evaluation framework. The evaluation framework includes 800-1000 objects (large objects like building facades, landmarks, etc.; small(er) objects like paintings, books, statues, etc.; scenes like interior scenes, natural scenes, multi-camera shots) and the evaluation of the responses should be conducted for the 114th meeting in San Diego.

The future of video coding standardization is currently taking shape in MPEG, paving the way for the successor of the HEVC standard. The current goal is to provide (native) support for scalability (more than two spatial resolutions) and a 30% compression gain for some applications (requiring a limited increase in decoder complexity), though a 50% compression gain (at a significant increase in encoder complexity) would be preferred. MPEG will hold a workshop at the next meeting in Geneva discussing specific compression techniques, objective (HDR) video quality metrics, and compression technologies for specific applications (e.g., multiple-stream representations, energy-saving encoders/decoders, games, drones). The current goal is to have the International Standard for this new video coding standard around 2020.
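To make the compression-gain targets concrete: a gain of X% means the same quality at X% less bitrate than the reference codec. A back-of-the-envelope sketch (the 15 Mbit/s HEVC reference bitrate is a hypothetical figure chosen for illustration, not a number from the MPEG documents):

```python
def target_bitrate(reference_kbps, gain):
    """Bitrate needed for equal quality, given a compression gain
    expressed as the fraction of bitrate saved vs. the reference codec."""
    return reference_kbps * (1.0 - gain)

hevc_4k = 15000  # hypothetical HEVC bitrate for a 4K stream, in kbit/s
print(target_bitrate(hevc_4k, 0.30))  # 10500.0 (the 30% goal)
print(target_bitrate(hevc_4k, 0.50))  # 7500.0 (the preferred 50%)
```

The same arithmetic explains the earlier AVC-to-HEVC transition: a ~50% bitrate saving is what makes "twice the compression efficiency" claims possible.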

MPEG has recently started a new project referred to as Genome Compression, which is, of course, about the compression of genome information. A big dataset has been collected and experts are working on the Call for Evidence (CfE). The plan is to hold a workshop at the next MPEG meeting in Geneva regarding the prospects of genome compression and storage standardization, targeting users, manufacturers, service providers, technologists, etc.


Summer in Warsaw. Photo (c) Christian Timmerer.
The 5th edition of the MPEG-2 Systems standard has been published as ISO/IEC 13818-1:2015 on the 1st of July 2015 and is a consolidation of the 4th edition + Amendments 1-5.

In terms of MPEG-DASH, the draft text of ISO/IEC 23009-1 3rd edition, comprising the 2nd edition + COR 1 + AMD 1 + AMD 2 + AMD 3 + COR 2, is available for committee-internal review. Publication is expected, most likely, in 2016. Currently, MPEG-DASH includes a lot of activity in the following areas: spatial relationship description, generalized URL parameters, authentication, access control, multiple MPDs, full-duplex protocols (aka HTTP/2 etc.), advanced and generalized HTTP feedback information, and various core experiments:
  • SAND (Server and Network Assisted DASH)
  • FDH (Full Duplex DASH)
  • SAP-Independent Segment Signaling (SISSI)
  • URI Signing for DASH
  • Content Aggregation and Playback Control (CAPCO)
In particular, the core experiment process is very open as most work is conducted during the Ad hoc Group (AhG) period which is discussed on the publicly available MPEG-DASH reflector.

MPEG Systems recently started an activity related to media orchestration, which applies to capture as well as consumption and concerns scenarios with multiple sensors as well as multiple rendering devices, including one-to-many and many-to-one scenarios, resulting in a worthwhile, customized experience.

Finally, the systems subgroup started an exploration activity regarding real-time streaming of files (a.k.a. TRUFFLE), which should perform a gap analysis leading to extensions of the MPEG Media Transport (MMT) standard. However, some experts within MPEG concluded that most/all use cases identified within this activity could actually be solved with existing technology such as DASH. Thus, this activity may still need some discussion...


The MPEG video subgroup is working towards a new amendment for the MPEG-4 AVC standard covering resolutions up to 8K and higher frame rates for lower resolutions. Interestingly, although MPEG is most of the time ahead of industry, 8K and high frame rates are already supported in browser environments (e.g., using bitdash 8K, HFR) and modern encoding platforms like bitcodin. However, it's good that we finally have means for interoperable signaling of this profile.

In terms of future video coding standardization, the video subgroup released a call for test material. Two sets of test sequences are already available and will be investigated regarding compression until next meeting.

After a successful call for evidence for High Dynamic Range (HDR), the technical work starts in the video subgroup with the goal of developing an architecture ("H2M") as well as three core experiments (optimization without HEVC specification changes, alternative reconstruction approaches, objective metrics).

The main topic of the JCT-VC was screen content coding (SCC), which came up with new coding tools that better compress content that is (fully or partially) computer generated, leading to a significant improvement in compression of approximately 50% or greater rate reduction for specific screen content.


The audio subgroup is mainly concentrating on 3D audio where they identified the need for intermediate bitrates between 3D audio phase 1 and 2. Currently, phase 1 identified 256, 512, 1200 kb/s whereas phase 2 focuses on 128, 96, 64, 48 kb/s. The broadcasting industry needs intermediate bitrates and, thus, phase 2 is extended to bitrates between 128 and 256 kb/s.


MPEG 3DG is working on point cloud compression (PCC), for which open-source software has been identified. Additionally, there are new activities in the areas of the Media Internet of Things (MIoT) and wearable computing (like glasses and watches) that could lead to new standards developed within MPEG. Therefore, stay tuned on these topics as they may shape your future.

The week after the MPEG meeting I met the MPEG convenor and the JPEG convenor again during ICME2015 in Torino but that's another story...
L. Chiariglione, H. Hellwagner, T. Ebrahimi, C. Timmerer (from left to right) during ICME2015. Photo (c) T. Ebrahimi.

Thursday, March 19, 2015

MMSys 2016 - Preliminary Call for Papers

ACM Multimedia Systems 2016 (MMSys'16) [PDF]
co-located with NOSSDAV, MoVid, and MMVE

May 10-13, 2016
Klagenfurt am Wörthersee, Austria

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating system, real-time system, and database communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to view the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types. MMSys is a venue for researchers who explore:
  • Complete multimedia systems that provide a new kind of multimedia experience or systems whose overall performance improves the state-of-the-art through new research results in one or more components, or
  • Enhancements to one or more system components that provide a documented improvement over the state-of-the-art for handling continuous media or time-dependent services.
Such individual system components include:
  • Operating systems
  • Distributed architectures and protocol enhancements
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New or improved I/O architectures or I/O devices, innovative uses and algorithms for their operation
  • Representation of continuous or time-dependent media
  • Metrics, measures and measurement tools to assess performance and quality of service/experience
This touches aspects of many hot topics: adaptive streaming, games, virtual environments, augmented reality, 3D video, immersive systems, telepresence, multi- and many-core, GPGPUs, mobile streaming, P2P, Clouds, cyber-physical systems. All submissions will be peer-reviewed by at least 3 members of the technical program committee. Full papers will be evaluated for their scientific quality. Accepted papers must reach a high scientific standard and document unpublished research.

Committee ACM MMSys
  • General chair: Christian Timmerer, AAU
  • TPC chair: Ali C. Begen, CISCO
  • Dataset chair: Karel Fliegel, CTU
  • Demo chairs: Omar Niamut, TNO & Michael Zink, UMass
  • Proceedings chair: Benjamin Rainer, AAU
  • Publicity chairs
    • America: Baochun Li, University of Toronto
    • Asia: Sheng-Wei Chen (a.k.a. Kuan-Ta Chen), Academia Sinica
    • Middle East: Mohamed Hefeeda, Qatar Computing Research Institute (QCRI)
    • Europe: Vincent Charvillat, IRIT-ENSEEIHT-Toulouse Univ.
  • Local chair: Laszlo Böszörmenyi, AAU
Important dates ACM MMSys
  • Submission deadline: November 27, 2015
  • Reviews available to authors: January 15, 2016
  • Rebuttal deadline:  January 22, 2016
  • Acceptance notification: January 29, 2016
  • Camera ready deadline: March 11, 2016
Committee ACM NOSSDAV (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Hermann Hellwagner, AAU
  • TPC chair: Eckehard Steinbach, TUM
Important dates ACM NOSSDAV
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MMVE (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Jean Botev, Univ. of Luxembourg
Important dates ACM MMVE
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MoVid (co-located with MMSys)
  • TPC chair: Pål Halvorsen, Simula/Univ. Oslo
  • TPC co-chair: Qi Han, Colorado School of Mines
Important dates ACM MoVid
  • Submission deadline: tbd
  • Acceptance notification: tbd
  • Camera ready deadline: tbd
Local organisation
  • Chair: Laszlo Böszörmenyi
  • Alpen-Adria-Universität Klagenfurt (AAU)
  • Institute of Information Technology (ITEC)
  • Universitätsstraße 65-67, A-9020 Klagenfurt
  • Email:

Wednesday, March 18, 2015

MPEG news: a report from the 111th meeting, Geneva, Switzerland

MPEG111 opening plenary.
This blog post is also available at SIGMM records.

The 111th MPEG meeting (note: link includes press release and all publicly available output documents) was held in Geneva, Switzerland, with some interesting aspects which I’d like to highlight here. Undoubtedly, it was the shortest meeting I’ve ever attended (and my first meeting was #61) as the final plenary concluded at 2015/02/20T18:18!

In terms of the requirements (subgroup), it’s worth mentioning the call for evidence (CfE) for high dynamic range (HDR) and wide color gamut (WCG) video coding, which comprises a first milestone towards a new video coding format. The purpose of this CfE is to explore whether (a) the coding efficiency and/or (b) the functionality of the HEVC Main 10 and Scalable Main 10 profiles can be significantly improved for HDR and WCG content. In addition, the requirements subgroup issued a draft call for evidence on free viewpoint TV. Both documents are publicly available here.

The video subgroup continued discussions related to the future of video coding standardisation and issued a public document requesting contributions on “future video compression technology”. Interesting application requirements come from over-the-top streaming use cases, which request HDR and WCG as well as video over cellular networks. Well, at least the former is covered by the CfE mentioned above. Furthermore, features like scalability and perceptual quality should be considered from the ground up and not (only) as an extension. Yes, scalability is something that really helps a lot in OTT streaming, starting from easier content management and cache-efficient delivery, and it allows for more aggressive buffer modelling and, thus, adaptation logic within the client, enabling better Quality of Experience (QoE) for the end user. It seems like complexity (at the encoder) is not so much a concern as long as it scales with cloud deployments (e.g., the bitdash demo area shows some neat 4K/8K/HFR DASH demos which have been encoded with bitcodin). Closely related to 8K, there’s a new AVC amendment coming up covering 8K; although one can do this already today (see before), it’s good to have standards support for it. For HEVC, the JCT-3D/VC issued the FDAM4 for 3D Video Extensions and started with PDAM5 for Screen Content Coding Extensions (both documents becoming publicly available after an editing period of about a month).
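To illustrate why buffer modelling feeds directly into the client's adaptation logic, here is a deliberately simplified rate-selection heuristic (a sketch for illustration, not the actual algorithm of dash.js or bitdash): pick the highest bitrate below a safety fraction of the measured throughput, but fall back to the lowest representation when the buffer runs dangerously low.

```python
def select_bitrate(ladder_kbps, throughput_kbps, buffer_s,
                   safety=0.8, panic_buffer_s=5.0):
    """Pick a representation bitrate from an ascending ladder (kbit/s).

    Simplified heuristic: stay below `safety` * measured throughput;
    if the playout buffer falls under `panic_buffer_s` seconds, play it
    safe and drop to the lowest bitrate to avoid a stall.
    """
    if buffer_s < panic_buffer_s:
        return ladder_kbps[0]
    budget = safety * throughput_kbps
    candidates = [b for b in ladder_kbps if b <= budget]
    return max(candidates) if candidates else ladder_kbps[0]

ladder = [400, 800, 1500, 3000, 6000]  # hypothetical bitrate ladder
print(select_bitrate(ladder, throughput_kbps=4000, buffer_s=20))  # 3000
print(select_bitrate(ladder, throughput_kbps=4000, buffer_s=2))   # 400
```

A scalable codec would let such a client be more aggressive: with a scalable base layer always fitting the budget, the panic fallback costs much less quality than switching single-layer streams.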

And what about audio? The audio subgroup has decided that ISO/IEC DIS 23008-3 3D Audio shall be promoted directly to IS, which means that the DIS was already in such good shape that only editorial comments needed to be applied, actually saving a balloting cycle. We have to congratulate the audio subgroup on this remarkable milestone.

Finally, I’d like to discuss a few topics related to DASH which is progressing towards its 3rd edition which will incorporate amendment 2 (Spatial Relationship Description, Generalized URL parameters and other extensions), amendment 3 (Authentication, Access Control and multiple MPDs), and everything else that will be incorporated within this year, like some aspects documented in the technologies under consideration or currently being discussed within the core experiments (CE).
Currently, MPEG-DASH conducts 5 core experiments:
  • Server and Network Assisted DASH (SAND)
  • DASH over Full Duplex HTTP-based Protocols (FDH)
  • URI Signing for DASH (CE-USD)
  • SAP-Independent Segment Signaling (SISSI)
  • Content aggregation and playback control (CAPCO)
The description of core experiments is publicly available and, compared to the previous meeting, we have a new CE which is about content aggregation and playback control (CAPCO) which "explores solutions for aggregation of DASH content from multiple live and on-demand origin servers, addressing applications such as creating customized on-demand and live programs/channels from multiple origin servers per client, targeted preroll ad insertion in live programs and also limiting playback by client such as no-skip or no fast forward.” This process is quite open and anybody can join by subscribing to the email reflector.

The CE for DASH over Full Duplex HTTP-based Protocols (FDH) is becoming a major effort and basically defines the usage of DASH with the push features of WebSockets and HTTP/2. At this meeting MPEG issued a working draft, and the CE on Server and Network Assisted DASH (SAND) also got its own part 5, where it goes to CD, but the documents are not publicly available. However, I'm pretty sure I can report more on this next time, so stay tuned or feel free to comment here.

Friday, February 27, 2015

IEEE JSAC Special Issue: Video Distribution over Future Internet

Special issue on Video Distribution over Future Internet 

Extended Submission Deadline: May 29, 2015

The current Internet is under tremendous pressure due to the exponential growth in bandwidth demand, fueled by the shift of video consumption to online distribution, IPTV, streaming services such as Netflix, and from phone networks to videoconferencing and Skype-like video communications. The Internet has also democratized the creation, distribution and sharing of user-generated video content through services such as YouTube, Vimeo or Hulu. The situation is further aggravated by the emerging trend of adopting higher-definition video streams, which require more and more bandwidth. Indeed, the Cisco Visual Networking Index (VNI) projects that video consumption will amount to 90% of global consumer traffic by 2017. Another shift predicted by Cisco VNI is that most data communications will be wireless by 2018.

To cope with the bandwidth growth, the shift to wireless, and to solve other related issues (e.g., naming, security, etc) with the current Internet, new architectures for the future Internet have been proposed and prototyped. Examples include Content-Centric Networks (CCN) or Named Data Networking (NDN), or some content-based extensions to Software-Defined Networking (SDN), among others. None of these emerging architectures deals specifically with video distribution, as they need to support a wider range of services, but all would have to support videos in an efficient manner. Therefore, the study of video distribution over the future Internet is of primary importance: how well does future Internet architecture facilitate video delivery? What kind of video distribution mechanisms need to be created to run on the future Internet? How will video be supported in the wireless portion of the future Internet? Can the current video distribution mechanisms (such as end-to-end dynamic rate adaptation schemes) be used or even enhanced for the future Internet? What are subjective/objective metrics for performance measurement? How to provide real-time guarantees for live and interactive video streams?

While the topic is quite wide, we will narrow the focus of this special issue on the fundamental problems of video distribution and delivery in the future Internet. We invite submissions of high-quality original technical and survey papers, which have not been published previously, on video distribution in the future Internet, including the following non-exhaustive list of topics. Please note that all topics must be understood in the context of the future Internet as outlined above.
  • Network-assisted video distribution, network support for multimedia, specifically supporting wireless environments
  • New information-centric and software-defined architectures to support wired and wireless video streaming
  • Resource allocation for wired and wireless video distribution
  • Media streaming, distribution, and storage support in the future Internet
  • In-network caching/storage, named data retrieval, publish/subscribe for video distribution in wired and wireless networks
  • Next generation Content Delivery Networks (CDN)
  • Adaptive streaming and rate adaptation for video streaming in the future Internet for wired and wireless networks
  • Peer-to-peer aspects of video multimedia distribution, including scaling and capacity
  • QoS/QoE measurement and support for video distribution in the future Internet
  • User-generated content and social networks for multi-media
  • Video compression techniques explicitly supporting the future Internet
  • Big-Data mechanisms (say referral engines or content placement algorithms) for video content over future Internet
  • Social-aware video content distribution over future Internet
  • Integration of video distribution and multimedia computing over future Internet
  • Testbeds and measurements of video distribution over future Internet
  • Cost and economic models for video distribution over future Internet
  • Theoretical foundations for video distribution over future Internet, e.g., network coding, information theory, machine learning, etc
Special Issue Editors
  • Prof. Cedric Westphal, Huawei Innovations & UCSC, USA 
  • Prof. Tommaso Melodia, Northeastern University, Boston, MA, USA 
  • Prof. Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria
  • Prof. Wenwu Zhu, Tsinghua University, Beijing, China
Important Dates
  • Paper Submission due: 05/29/2015
  • First review complete: 09/15/2015
  • Acceptance Notification: 11/15/2015
  • Camera-ready version: 12/15/2015
  • Publication date: Second Quarter 2016 
Manuscript submissions and reviewing process: All submissions must be original work that has not been published or submitted elsewhere. For submission format, please follow the IEEE JSAC guidelines. Each paper will go through a two-round rigorous reviewing process by at least three leading experts in related areas. Papers should be submitted through EDAS.