Tuesday, May 23, 2017

The Evolution of Programming Languages and Computer Architectures over the Last 50 Years

Prof. Niklaus Wirth

June 12, 2017, 16:00
Alpen-Adria-Universität Klagenfurt, E.2.42

Please register via martina@itec.aau.at

We recount the development of procedural programming languages and of computer architectures, beginning with Algol 60 and mainframe computers, and discuss the influence of the former on the latter. We point out the major innovative features of computers and the main characteristics of languages. What makes languages high-level, and what caused their cancerous growth and overwhelming complexity? Are we stuck with the monsters, or is further, sound development still possible?

[Photo: © Peter Badge/Typos1, in cooperation with HLFF, all rights reserved 2017]
Niklaus Wirth is one of the most influential computer scientists ever. He is known above all for his work in programming language and compiler design, but he has also contributed substantially to hardware design, operating system design, and software engineering in general. He spent most of his career as a professor at ETH Zürich, but also spent several years at outstanding research institutions in the USA (e.g., Xerox PARC) and Canada.

His best known programming language is Pascal. Pascal was published at the end of the sixties, at a time when the scene was dominated on the one hand by widely used but theoretically poorly founded languages (such as Fortran and Cobol) and on the other hand by theoretically overloaded and practically hardly usable languages (such as Algol 68). With Pascal, Wirth succeeded in finding the happy medium. It was the first programming language 1) incorporating the sound theory of safe programming (as defined by E.W. Dijkstra, C.A.R. Hoare and others, including Wirth himself); 2) applying strict, static type checking; 3) providing a flexible system of recursive type constructors. In other words: strictness regarding syntax, but freedom in expressing semantics. In later languages Wirth adopted the concepts of encapsulation and information hiding (Modula and Modula-2) and object orientation (Oberon and Oberon-2) in a novel, clean and simple way. Oberon was not only the name of a language, but also of an extremely compact yet extensible operating system, featuring – among other things – perhaps the first efficient garbage collector in the world. He also designed a hardware architecture best suited to the requirements of compiler code generation (the Lilith architecture), thus becoming a pioneer of later RISC architectures. He also designed a simple and compact language for hardware design (LOLA). The guiding principle in all his work was the maxim taken from Albert Einstein: “Make it as simple as possible – but not simpler!”

Niklaus Wirth has published over 10 books and numerous scientific papers. For a few years he was the most cited computer scientist of all. He has received practically all awards a computer scientist can get, first and foremost the Turing Award, which is often called “the Nobel Prize for computer scientists”. He is a member of the order Pour le mérite for science and art and of the German Academy of Sciences; he received the IEEE Computer Pioneer Award, the Outstanding Research Award in Software Engineering of ACM SIGSOFT, and many others. Niklaus Wirth is an excellent speaker; humble, wise and with a great sense of humor. This makes his talks an unforgettable event for his audience. The Institute of Information Technology at Klagenfurt University is highly honored and pleased that he accepted our invitation.

Monday, May 22, 2017

TNC17 presentation: Over-the-Top Content Delivery: State of the Art and Challenges

Over-the-Top Content Delivery: State of the Art and Challenges

Christian Timmerer (AAU/Bitmovin)

Abstract: Over-the-top content delivery is becoming increasingly attractive for both live and on-demand content thanks to the popularity of platforms like YouTube, Vimeo, Netflix, Hulu, Maxdome, etc. In this tutorial, we present the state of the art and the challenges ahead in over-the-top content delivery. In particular, the goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS), including the recently ratified MPEG-DASH standard. The main focus of the tutorial will be on common problems in HAS deployments such as client design, QoE optimization, multi-screen and hybrid delivery scenarios, and synchronization issues. For each problem, we will examine proposed solutions along with their pros and cons. In the last part of the tutorial, we will look into open issues and review work in progress and future research directions.
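
At the heart of every HAS deployment is a client that measures throughput and picks the bitrate of the next segment. The following minimal sketch shows a purely throughput-based selection loop of the kind a DASH/HLS player might use; the bitrate ladder, the smoothing factor, and the safety margin are illustrative assumptions and are not taken from the tutorial.

    # Minimal sketch of throughput-based adaptation in an HTTP adaptive streaming (HAS) client.
    # The bitrate ladder, smoothing factor, and safety margin are illustrative assumptions.

    REPRESENTATIONS_KBPS = [400, 1200, 2500, 5000, 8000]  # hypothetical DASH/HLS bitrate ladder
    ALPHA = 0.8          # weight of the throughput history (EWMA smoothing)
    SAFETY_MARGIN = 0.9  # request slightly below the estimated throughput

    def update_estimate(estimate_kbps, last_segment_kbps):
        """Exponentially weighted moving average of the measured segment throughput."""
        if estimate_kbps is None:
            return last_segment_kbps
        return ALPHA * estimate_kbps + (1 - ALPHA) * last_segment_kbps

    def select_representation(estimate_kbps):
        """Pick the highest bitrate that still fits under the estimated throughput."""
        candidates = [r for r in REPRESENTATIONS_KBPS if r <= SAFETY_MARGIN * estimate_kbps]
        return max(candidates) if candidates else min(REPRESENTATIONS_KBPS)

    # Example: after measuring roughly 3 Mbit/s for the last segment, the client picks 2500 kbps.
    estimate = update_estimate(None, 3000)
    print(select_representation(estimate))  # -> 2500

Real players add buffer-based logic on top of such estimates, which is exactly the kind of client-design trade-off the tutorial covers.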

TNC17 Networking Conference - The Art of Creative Networking: https://tnc17.geant.org/

Please feel free to contact me for details; I'd also be happy to meet you at TNC17.

Friday, May 19, 2017

QoMEX'17 Special Session 2: Down the Rabbit Hole – General Aspects of VR and the Immersion Experience


QoMEX 2017
May 31 – June 2, 2017 in Erfurt, Germany


Picture from http://www.chalquist.com/fantastophobia.html

Chairs

  • Raimund Schatz, Austrian Institute of Technology (AIT), Austria
  • Christian Timmerer, Alpen-Adria-Universität (AAU) Klagenfurt, Austria
  • Judith Redi, Delft University of Technology, The Netherlands

Motivation

Currently, we witness a proliferation of new products and applications based on immersive media technologies, exemplified best by the current “VR hype” causing a flurry of new devices and applications to hit the marketplace. The potential of immersive media is large; however, the investigation and application of QoE in this context are still in their infancy, as many issues are not yet understood. Consequently, the multimedia-quality community faces new, rapidly moving research targets, resulting in the following overarching questions:
  • What is the interplay between concepts like Immersion, Presence, Interactivity, Multimedia Quality and User Experience in the context of emerging immersive applications and technologies?
  • What should be the role of QoE in this domain?
  • How to characterize, assess, model and manage QoE for immersive media-rich applications?
This special session aims to bring together researchers and practitioners to present and discuss quality-related issues and topics in the field of immersive media. The goal is to highlight the major challenges the multimedia quality community should target in this dynamically evolving domain and to identify ways forward to address them effectively.

All QoMEX'17 special sessions can be found here.

Schedule & Format

Thursday, June 1, 2017, 13:00–15:00
  1. Chenyan Zhang, Andrew Perkis and Sebastian Arndt, Spatial Immersion versus Emotional Immersion, Which is More Immersive?
  2. Conor Keighrey, Ronan Flynn, Siobhan Murray and Niall Murray, A QoE Evaluation of Augmented and Immersive Virtual Reality Speech & Language Assessment Applications 
  3. Raimund Schatz, Andreas Sackl, Christian Timmerer and Bruno Gardlo, Towards Subjective Quality of Experience Assessment for Omnidirectional Video Streaming
  4. Ashutosh Singla, Stephan Fremerey, Werner Robitza and Alexander Raake, Measuring and Comparing QoE and Simulator Sickness of Omnidirectional Videos in Different Head Mounted Displays
  5. Yashas Rai, Patrick Le Callet and Philippe Guillotel, Which saliency weighting for omni directional image quality assessment?
  6. Evgeniy Upenik, Martin Rerabek and Touradj Ebrahimi, On the Performance of Objective Metrics for Omnidirectional Visual Content
The special session format will be as follows:
  • 15min slots per presentation (12-13min talk, 1min for 1 question, 1-2 min time for speaker change)
  • 30min for panel discussion

Panel Discussion

For the panel discussion, our aim is to address the following questions:
  1. What is your understanding of a fully immersive experience (provide a definition or propose keywords to characterize immersive experiences)?
  2. Which aspects should the QoMEX community focus on?
  3. What can we learn and which knowledge can we re-use from the 3D and HDR research experiences?
... and we would also like to solicit input from the community at large. Therefore, we have set up a shared Google doc asking for YOUR input: http://bit.ly/QoMEXSpS2.

Come and join us on the journey down the rabbit hole, which will eventually lead to wonderland.

Monday, May 8, 2017

Joint QUALINET-VQEG team on Immersive Media (JQVIM)

QUALINET and VQEG have the ambition to increase the collaboration between the two organizations. As a pilot effort, we are hereby proposing a Joint QUALINET-VQEG team on Immersive Media (JQVIM). The actual collaboration will be between the QUALINET Task Force "Immersive Media Experiences (IMEx)" and the VQEG Working Group "Immersive Media Group (IMG)".

The initial goals for JQVIM are:
  • Collecting and producing open-source immersive media content and data sets
  • Establishing and recommending best practices and guidelines
  • Collecting and producing open-source immersive media tools
  • Surveying standardisation activities
Anybody who is interested in joining this effort, please contact me. Interestingly, this effort is related to my previous post, where the VR Industry Forum (VRIF) calls for VR360 content; MPEG has also established an Ad-hoc Group (AhG) on Immersive Media Quality Evaluation with similar mandates.

Friday, May 5, 2017

VR Industry Forum (VRIF) calls for VR360 Content



1 Introduction

The VR Industry Forum (VRIF) is a cross-industry forum whose purpose is “to further the widespread availability of high quality audiovisual VR experiences, for the benefit of consumers”. VRIF builds on standards created by formal Standards Development Organizations, such as MPEG, and seeks to use these standards to enable the interoperable deployment of high-quality VR360 services.

VRIF’s initial focus is on Three Degrees-of-Freedom (3-DoF) video and audio. VRIF is now calling for content in order to build a library of public test vectors that may be used by content providers and service providers as a reference, and by manufacturers of applications and devices to test implementations against the VRIF guidelines. VRIF’s hope is to build a library of content that can be widely used by industry to test and promote VR services.

VRIF prefers to receive VR360 content accompanied by 3D spatial audio. If you are willing to contribute other forms of content that may be relevant to VRIF, please contact us.

2 License

VRIF calls for content with as few restrictions as possible. It must be possible to use the content for testing purposes within VRIF, and for demonstration at private and public events by VRIF members. It is also highly desirable for the content to be available for general research, development, and demonstration of audio/visual or image signal processing technology. It must be possible to extract single frame images from the content for inclusion in technical publications.

VRIF prefers content that is licensed under a Creative Commons license, as documented here: https://creativecommons.org/share-your-work/licensing-types-examples/

If you are considering making content available but would like to impose a few specific restrictions, then VRIF is willing to consider such restrictions as long as these are consistent with VRIF’s intended use.

3 Use Case

VRIF develops use cases that drive our Guidelines. The relevant aspects of the current use case are provided in this section (Section 3). The derived requirements for the test material are provided in Section 4.

A service provider offers a library of 360 A/V content. The library is a mixture of content formats: user-generated content, professionally produced studio content, VR documentaries, promotional videos, as well as highlights of sports events. The content enables changing the field-of-view based on user interaction.

The service provider wants to create a portal to distribute the content to a multitude of devices that support 360-A/V and VR processing and rendering. The service provider wants to target two types of applications:
  • Primarily, viewing in an HMD with head motion tracking. 
  • Additionally, the content provider may enable viewing on a “flat screen” with the user selecting the field-of-view through manual interaction (e.g. mouse input or swiping). 
The service provider expects different types of consumption and rendering devices with different capabilities in terms of decoding and rendering. The service provider has access to the original footage of the content and is permitted to encode and transcode to appropriate distribution formats.

The footage includes different types of 360 A/V VR content, such as: 

For video: one of the following three
  • Pre-stitched monoscopic video, i.e., a (360° or possibly less than 360°) spherical video without depth perception, with equirectangular projection (ERP; see the mapping sketch at the end of this section).
  • Pre-stitched stereoscopic video, i.e., a spherical video using a separate input for each eye, typically with ERP.
  • Fish-eye content, typically user-generated
For video: original content
  • Original content, either in its original uncompressed form or in a high-quality mezzanine format.
  • Basic VR content: 4k x 2k in equirectangular projection (ERP), 8 or 10 bit, BT.709, 30 fps and up.
  • High-quality content: 8k x 4k (ERP), 10 bit, possibly advanced transfer characteristics and color transforms, sufficiently high frame rates, etc.
Sufficient metadata is provided to appropriately describe the A/V content.

For audio
  • Spatial audio content for immersive experiences:
    • Channel-based audio
    • Object-based audio
    • Scene-based audio
    • Or a combination of the above 
  • Sufficient metadata for encoding, decoding and rendering the spatial audio scene permitting dynamic interaction with the content. This may include additional metadata that is also used in regular TV applications, such as for loudness management. 
  • Diegetic and non-diegetic audio content.
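
Both the basic and the high-quality video formats above rely on the equirectangular projection. As a concrete illustration, here is a minimal sketch of the standard ERP mapping from a viewing direction to a pixel position; the 4096 x 2048 frame size is an illustrative assumption and not a VRIF requirement.

    # Minimal sketch of the equirectangular projection (ERP) mapping referenced above.
    # A viewing direction (longitude in [-pi, pi], latitude in [-pi/2, pi/2], in radians)
    # is mapped linearly onto a W x H ERP frame; the frame size is an illustrative assumption.
    import math

    def erp_pixel(longitude, latitude, width=4096, height=2048):
        """Map a sphere direction to ERP pixel coordinates (u, v)."""
        u = (longitude / (2 * math.pi) + 0.5) * width   # longitude -> horizontal position
        v = (0.5 - latitude / math.pi) * height         # latitude  -> vertical position
        return u, v

    # Example: the direction straight ahead (0, 0) lands at the centre of the frame.
    print(erp_pixel(0.0, 0.0))  # -> (2048.0, 1024.0)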

4 Test Material Requirements 

We are seeking content with the following characteristics:
  • Sequences should have no or only few issues in their original form (stitching artifacts, noise)
  • Content: 
    • Basic VR video content: approximately 4k x 2k (ERP), 8 or 10 bit, BT.709, as low as 25/30 fps, but also 50/60 fps
    • High-quality Video Content: approximately 6k x 3k, 8k x 4k and up (ERP), 10 bit, possibly advanced transfer characteristics and colour transforms, possibly even higher frame rates, etc.
    • Monoscopic or Stereoscopic
  • Audio accompanying the video:
    • preferably 3D spatial audio, temporally synchronized and spatially aligned with the video, provided in the following formats:
      • Channel-based audio 
      • Object-based audio
      • Scene-based audio
      • Or a combination of the above
    • Sufficient metadata for encoding, decoding and rendering the spatial audio scene permitting dynamic interaction with the content. The metadata may include additional metadata that is also used in regular TV applications, such as for loudness management. 
    • Diegetic and non-diegetic audio content.
  • Duration: between 30 seconds and 2 minutes.
  • Type of content:
    • Sports
    • Live events (e.g., music/concerts)
    • Outdoor scenery (nature or urban)
    • professionally produced indoor
  • Artistic characteristics:
    • natural and synthetically generated (but still coded as video)
    • moving or static ROI
    • preference for fixed camera; optionally moving camera
  • Packaging
    • Video: Raw or lightly compressed mezzanine format (to be worked out)
    • Audio: uncompressed produced Audio assets, or lightly compressed 

If you have content that does not meet all requirements, please get in touch, as we are interested in understanding whether it would still be useful for our purposes.

5 Credits

VRIF is happy to acknowledge sponsors and contributors of the content by providing credits, in one or more of the following ways:
  • on the VRIF website, 
  • along with the hosting (e.g., the download page) 
  • modestly embedded in the content itself, in a way that doesn’t detract from that content. 

6 Contacts

For questions or to respond to this call, please contact:
  • Rob Koenen: rob.koenen@tno.nl
  • Thomas Stockhammer: tsto@qti.qualcomm.com 
  • VR Industry Forum: info@vr-if.org

Thursday, April 27, 2017

ACM MMSys'17 special session paper accepted: "Towards Bandwidth Efficient Adaptive Streaming of Omnidirectional Video over HTTP: Design, Implementation, and Evaluation"


Towards Bandwidth Efficient Adaptive Streaming of Omnidirectional Video over HTTP
Design, Implementation, and Evaluation 

Mario Graf (Bitmovin), Christian Timmerer (AAU/Bitmovin), and Christopher Mueller (Bitmovin)


Abstract: Real-time entertainment services such as streaming audiovisual content deployed over the open, unmanaged Internet now account for more than 70% of Internet traffic during peak periods. More and more such bandwidth-hungry applications and services are being proposed, like immersive media services such as virtual reality and, specifically, omnidirectional/360-degree video. The adaptive streaming of omnidirectional video over HTTP imposes an important challenge on today’s video delivery infrastructures, which calls for dedicated, thoroughly designed techniques for content generation, delivery, and consumption.

This paper describes the usage of tiles (as specified within modern video codecs such as HEVC/H.265 and VP9) to enable bandwidth-efficient adaptive streaming of omnidirectional video over HTTP, and we define various streaming strategies. To this end, the parameters and characteristics of a dataset for omnidirectional video are proposed and instantiated by example in order to evaluate various aspects of such an ecosystem, namely bitrate overhead, bandwidth requirements, and quality aspects in terms of viewport PSNR. The results indicate bitrate savings from 40% (in a realistic scenario with recorded head movements from real users) up to 65% (in an ideal scenario with a centered/fixed viewport) and serve as a baseline and as guidelines for advanced techniques, including the outline of a research roadmap for the near future.
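
To make the tile-based idea more concrete, the following minimal sketch shows viewport-dependent quality selection over a tiled ERP frame: tiles overlapping the viewport are fetched in high quality, the rest in low quality. The 4x8 tile grid, the two bitrates, and the viewport test are illustrative assumptions and do not reproduce the streaming strategies evaluated in the paper.

    # Minimal sketch of viewport-dependent tile selection for tiled omnidirectional streaming.
    # The 4x8 tile grid, the per-tile bitrates, and the 90-degree horizontal viewport are
    # illustrative assumptions; wrap-around at +/-180 degrees is ignored for brevity.

    TILE_ROWS, TILE_COLS = 4, 8          # hypothetical tiling of the ERP frame
    HIGH_KBPS, LOW_KBPS = 1000, 200      # hypothetical per-tile bitrates

    def tile_columns_in_viewport(yaw_deg, h_fov_deg=90.0):
        """Return the column indices of tiles overlapping the horizontal viewport."""
        visible = set()
        tile_width = 360.0 / TILE_COLS
        start, end = yaw_deg - h_fov_deg / 2, yaw_deg + h_fov_deg / 2
        for col in range(TILE_COLS):
            tile_start = col * tile_width - 180.0
            tile_end = tile_start + tile_width
            if tile_end > start and tile_start < end:
                visible.add(col)
        return visible

    def planned_bitrate(yaw_deg):
        """Total requested bitrate: viewport tiles in high quality, all others in low quality."""
        visible = tile_columns_in_viewport(yaw_deg)
        per_row = sum(HIGH_KBPS if col in visible else LOW_KBPS for col in range(TILE_COLS))
        return TILE_ROWS * per_row

    # Example: a centered, fixed viewport needs far less bitrate than fetching every tile
    # in high quality (12800 vs. 32000 kbps with the assumed numbers).
    print(planned_bitrate(0.0), TILE_ROWS * TILE_COLS * HIGH_KBPS)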

ACM MMSys 2017: http://mmsys17.iis.sinica.edu.tw/

Please feel free to contact me for details; I'd also be happy to meet you at ACM MMSys'17.

ACM MMSys'17 demo paper accepted: "AdViSE: Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players"

AdViSE: Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players

Anatoliy Zabrovskiy (Petrozavodsk State University), Evgeny Kuzmin (Petrozavodsk State University), Evgeny Petrov (Petrozavodsk State University), Christian Timmerer (AAU/Bitmovin), and Christopher Mueller (Bitmovin)

Abstract: Today we can observe a plethora of adaptive video streaming services and media players which support interoperable formats like DASH and HLS. Most of the players and their rate adaptation algorithms work as a black box. We have developed a system for easy and rapid testing of media players under various network scenarios. In this paper, we introduce AdViSE, the Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The presented framework is used for the comparison and testing of media players in the context of adaptive video streaming over HTTP in web/HTML5 environments.

The demonstration showcases a series of experiments with different media players under given context conditions (e.g., network shaping, delivery format). We will also demonstrate the real-time capabilities of the framework and offline analysis including several QoE metrics with respect to a newly introduced bandwidth index.
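
Testing players "under various network scenarios" usually boils down to shaping the link that carries the segments. As a rough illustration, here is a minimal sketch that steps through a bandwidth profile using the Linux tc token bucket filter; the interface name and the profile are illustrative assumptions, and this is not necessarily the shaping mechanism AdViSE itself uses.

    # Minimal sketch of bandwidth shaping for automated player tests using Linux tc/tbf.
    # Run on the machine whose egress carries the video segments towards the player
    # (e.g., a gateway or the web server); interface name and profile are assumptions.
    import subprocess
    import time

    INTERFACE = "eth0"  # hypothetical egress interface towards the player under test
    BANDWIDTH_PROFILE = [("5mbit", 30), ("1mbit", 30), ("3mbit", 30)]  # (rate, seconds)

    def set_rate(rate):
        """Limit the egress rate of the interface with a token bucket filter."""
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", INTERFACE, "root",
             "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"],
            check=True,
        )

    def run_profile():
        """Step through the bandwidth profile while the player under test is streaming."""
        for rate, duration in BANDWIDTH_PROFILE:
            set_rate(rate)
            time.sleep(duration)
        # Remove the shaping rule when the test run is done.
        subprocess.run(["tc", "qdisc", "del", "dev", INTERFACE, "root"], check=True)

    if __name__ == "__main__":
        run_profile()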

ACM MMSys 2017: http://mmsys17.iis.sinica.edu.tw/

Please feel free to contact me for details; I'd also be happy to meet you at ACM MMSys'17.