
Thursday, May 20, 2010

11th Multimedia Metadata Community Workshop on Interoperable Social Multimedia Applications (WISMA 2010)

On May 19-20, 2010 I attended the 11th Multimedia Metadata Community Workshop on Interoperable Social Multimedia Applications (WISMA 2010) in Barcelona, Spain. The proceedings are available online at CEUR-WS.org, and there is also a Twitter stream in case you'd like to review it.

I had two presentations, which I'd like to share here. The first one was on A Metadata Model for Peer-to-Peer Media Distribution:
Abstract: In this paper we describe a metadata solution for a Peer-to-Peer (P2P) content distribution system termed NextShare. We outline the key motivating factors for our approach, detail the overall generic architecture we have developed and present the workflow for delivering metadata through Peer-to-Peer based content distribution. The paper also presents the metadata model we have developed and we describe in detail how all the content can be packetized and distributed using NextShare. Finally, a description of the core and optional metadata attributes which may be utilized within the system is provided.
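To give a rough flavour of the packetization idea, here is a minimal Python sketch of how core and optional metadata attributes might be attached to a content package before it is split into verifiable pieces for P2P distribution. All field names and the structure are assumptions for illustration only, not the actual NextShare metadata model described in the paper.

    # Illustrative sketch only: field names and structure are assumptions,
    # not the actual NextShare metadata model described in the paper.
    from dataclasses import dataclass
    from hashlib import sha1
    from typing import Optional

    @dataclass
    class CoreMetadata:
        title: str          # minimal set every content item must carry
        identifier: str
        media_type: str     # e.g. "video/mp4"

    @dataclass
    class OptionalMetadata:
        synopsis: Optional[str] = None
        language: Optional[str] = None
        rights: Optional[str] = None

    @dataclass
    class ContentPackage:
        core: CoreMetadata
        optional: OptionalMetadata
        piece_size: int = 256 * 1024

        def packetize(self, payload: bytes):
            """Split the payload into fixed-size pieces and hash each one,
            so peers can verify pieces independently of the metadata."""
            pieces = [payload[i:i + self.piece_size]
                      for i in range(0, len(payload), self.piece_size)]
            return [(sha1(p).hexdigest(), p) for p in pieces]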
The second presentation provided an answer to the following question: Are Sensory Effects ready for the World Wide Web?
Abstract. The World Wide Web (WWW) is one of the main entry points to access and consume Internet content in various forms. In particular, the Web browser is used to access different types of media (i.e., text, image, audio, and video) and on some platforms is the only way to access the vast amount of information on the Web. Recently, it has been proposed to stimulate also other senses than vision or audition while consuming multimedia content through so-called sensory effects, with the aim to increase the user’s Quality of Experience (QoE). The effects are represented as Sensory Effects Metadata (SEM) which is associated to traditional multimedia content and is rendered (synchronized with the media) on sensory devices like fans, vibration chairs, lamps, etc. In this paper we provide a principal investigation of whether the sensory effects are ready for the WWW and, in anticipation of the result, we propose how to embed sensory effect metadata within Web content and the synchronized rendering thereof.
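As a small illustration of what synchronized rendering of sensory effects could look like, the Python sketch below schedules effect activations against a media timeline. The effect list and the device "interface" are hypothetical and heavily simplified with respect to the actual MPEG-V Sensory Effect Metadata.

    # Minimal sketch, assuming a simplified effect list; this is NOT the
    # MPEG-V SEM schema, just an illustration of timeline-synchronized rendering.
    import time

    # (start_time_s, duration_s, device, intensity 0..1) -- hypothetical format
    effects = [
        (2.0, 3.0, "fan", 0.6),
        (5.0, 1.5, "vibration-chair", 0.9),
        (7.0, 4.0, "lamp", 0.3),
    ]

    def render_effects(effects, media_start):
        """Fire each sensory effect when the media clock reaches its start time."""
        pending = sorted(effects)
        while pending:
            media_time = time.time() - media_start
            start, duration, device, intensity = pending[0]
            if media_time >= start:
                print(f"{media_time:5.1f}s  activate {device} "
                      f"(intensity={intensity}, for {duration}s)")
                pending.pop(0)
            else:
                time.sleep(0.05)

    render_effects(effects, media_start=time.time())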

Monday, February 1, 2010

WISMA 2010 - A Multimedia Metadata Community Workshop in Barcelona

=====================================================================
CALL for PAPERS
Workshop on Interoperable Social Multimedia Applications (WISMA 2010)
11th International Workshop of the Multimedia Metadata Community

Submission due: 28th February 2010 - Workshop dates: 19th-20th May 2010
Workshop venue: Universitat Politècnica de Catalunya, Barcelona (Spain)

In the Web 2.0, a growing amount of multimedia content is being shared on social networks. Due to the dynamic and ubiquitous nature of this content (and its associated descriptors), interesting new challenges for indexing, access, and search and retrieval have arisen. In addition, there is growing concern about privacy protection, as a great deal of personal data is being exchanged. Teenagers (and even younger kids), for example, require special protection applications, while adults want greater control over access to their content. Furthermore, the integration of mobile technologies with Web 2.0 applications is another area of research that needs to be addressed, not only in terms of content protection but also for the implementation of new and enriched context-aware applications. Finally, social multimedia is also expected to improve the performance of traditional multimedia search and retrieval approaches by helping to bridge the semantic gap. The integration of these aspects, however, is not trivial and has created a new interdisciplinary area of research. In any case, one common issue needs to be addressed in all of the previously identified social multimedia applications: their interoperability and extensibility. Thus, the workshop is particularly interested in research contributions based on standards.

Recommended topics include, but are not limited to, the following:
• Privacy in social networks
• Access control in social networks
• Social media analysis
• Social media retrieval
• Context-awareness in social networks
• Mobile applications scenario
• Social networks ontologies and interoperability
• Security and privacy ontologies
• Content distribution over social networks
• Multimedia ontologies and interoperability
• Multimedia search and retrieval
• Semantic metadata management
• Collaborative tagging
• Interaction between access control and privacy policies
• Social networks and policy languages
• Policy management

Research Papers: Papers should describe original and significant work in research or industrial practice on related topics. (i) Long papers (up to 8 pages) should focus on research studies, applications, and experiments. (ii) Short papers (up to 4 pages) are particularly suitable for reporting work-in-progress, interim results, or position statements.
Applications and Industrial Presentations: Proposals for presentations of applications and tools, including reports on the application and utilisation of tools, industrial practices and models, or tool/system demonstrations. Abstract: 2 pages.

All submissions and proposals are to be in English and submitted in PDF format at the WISMA paper submission web site (http://www.easychair.org/conferences/?conf=wisma2010) on or before 28th February 2010. Papers should be formatted according to LNCS style (http://www.springer.com/computer/lncs?SGWID=0-164-7-72376-0). The workshop proceedings are to be published as a volume at CEUR Workshop Proceedings (http://ceur-ws.org).

General Chair: Jaime Delgado (UPC, Spain).

International Programme Committee (provisional):
Alessandro Vinciarelli (Idiap, Switzerland), Anna Carreras (Universitat Politècnica de Catalunya, Spain), Ansgar Scherp (University of Koblenz-Landau, Germany), Bill Grosky (University of Michigan, USA), Britta Meixner (University of Passau, Germany), Christian Guetl (Graz University of Technology, Austria), Christian Timmerer (Alpen-Adria-University Klagenfurt, Austria), Chris Poppe (Ghent University - IBBT, Belgium), Dominik Renzel (RWTH Aachen University, Germany), Frédéric Dufaux (EPFL, Switzerland), Giuseppe Amato (ISTI Pisa, Italy), Günther Hölbling (University of Passau, Germany), Harald Kosch (University of Passau, Germany), Herve Bourlard (Idiap, Switzerland), Jaime Delgado (Universitat Politècnica de Catalunya, Spain), Laszlo Böszörmenyi (Klagenfurt University, Austria), Lionel Brunie (INSA de Lyon, France), Marc Spaniol (MPI - Saarbrücken, Germany), Markus Strohmaier (Know Center Graz, Austria), Mathias Lux (Klagenfurt University, Austria), Michael Granitzer (Know Center Graz, Austria), Oge Marques (Florida Atlantic University, USA), Ralf Klamma (RWTH Aachen University, Germany), Richard Chbeir (Bourgogne University, France), Romulus Grigoras (ENSEEIHT, France), Ruben Tous (Universitat Politècnica de Catalunya, Spain), Savvas Chatzichristofis (Democritus University of Thrace, Greece), Stéphane Marchand Maillet (UniGE, Switzerland), Timo Ojala (University of Oulu, Finland), Touradj Ebrahimi (EPFL, Switzerland), Vincent Charvillat (ENSEEIHT, France), Vincent Oria (NJIT, USA), Werner Bailer (Joanneum Research Graz, Austria), Yiwei Cao (RWTH Aachen University, Germany), Yu Cao (California State University, Fresno, USA).

Supported by:
Multimedia Metadata Community
Universitat Politècnica de Catalunya BARCELONATECH
=====================================================================

Wednesday, December 2, 2009

IEEE Computing Now Dec'09 Theme: Multimedia Metadata and Semantic Management

I've co-edited the December 2009 theme of IEEE Computing Now on Multimedia Metadata and Semantic Management.

Guest Editors' Introduction • Harald Kosch and Christian Timmerer • December 2009

Multimedia semantics is more than developing ontologies to describe the nature of multimedia content. It’s the key research area for interoperable, intelligent access to and management of multimedia materials.

There are many metadata standards. More than 10 organizations vie for leadership in content description, including the Dublin Core Metadata Initiative, ISO/IEC’s MPEG working group, and the World Wide Web Consortium (W3C). For a complete list, see the “Semantic Standards” sidebar.

Recent studies show that this diversity is a major hindrance to a common multimedia semantic understanding. So, the first challenge to address in this area is the heterogeneity in metadata description and query languages. We must build better bridges across semantic gaps. We also need to cleverly aggregate and concisely present results for users while providing security and related access-control techniques appropriate to multimedia content. Other challenges include synchronizing metadata information to media and vice versa and managing this relationship throughout the metadata life cycle.

Effective multimedia management must span the metadata life cycle—from its creation through processing, storage, distribution, and deployment—and work whether the metadata is tightly connected with or independent of the media it describes.

Finally, we need better integration of situational context. This includes not only domain knowledge, but also legal and cultural issues, metadata and semantic quality, and compression and encryption techniques.

Combining the Semantic Web with multimedia semantics offers interesting research opportunities for social-information management, such as collaborative multimedia tagging, semantics-aware social-media engineering, and multimedia mash-ups. These opportunities were well represented at the 2009 International Conference on Semantic and Digital Media Technologies (SAMT 09, www.samt2009.org). The Virtual Campfire exemplifies emerging systems for integrating social multimedia. This project, led by Ralf Klamma at the RWTH Aachen (www.dbis.rwth-aachen.de/lehrstruhl/projects/virtualCampfire), establishes an advanced framework to create, search, and share multimedia artifacts with context awareness across communities.

Selected Articles on Multimedia Semantics

This month’s theme includes the following featured articles:

In “Managing and Querying Efficiently Distributed Semantic Multimedia Metadata Collections” (IEEE MultiMedia, Oct.–Dec. 2009, pp. 12–20, special issue on Multimedia Metadata and Semantic Management), Sébastien Laborie, Ana-Maria Manzat, and Florence Sèdes propose an original model of a centralized metadata resume. Their resume is a concise version of the whole metadata, and it can link to some desired multimedia content on remote servers and databases. The authors also propose an automatic construction process for the metadata resume. They demonstrate the framework with current Semantic Web technologies for representing and querying semantic metadata. Their experimental results show the benefits of their approach.
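To illustrate the general idea of a metadata resume (this is not the authors' actual construction process), the following Python sketch uses rdflib to keep only a small, frequently queried subset of a full metadata graph and to answer a SPARQL query against it. The predicates retained, the example URIs, and the literals are assumptions made for illustration.

    # Illustrative sketch with rdflib; the retained predicates are assumptions,
    # not the construction process proposed by the authors.
    from rdflib import Graph, Namespace, URIRef, Literal
    from rdflib.namespace import DC

    EX = Namespace("http://example.org/media/")

    full = Graph()
    full.add((EX.video1, DC.title, Literal("Holiday clip")))
    full.add((EX.video1, DC.creator, Literal("Alice")))
    full.add((EX.video1, EX.bitrate, Literal(2500)))   # detail left out of the resume
    full.add((EX.video1, EX.location, URIRef("http://remote.example.org/store/video1")))

    # Build a concise "resume": keep only the descriptive core plus the remote location.
    keep = {DC.title, DC.creator, EX.location}
    resume = Graph()
    for s, p, o in full:
        if p in keep:
            resume.add((s, p, o))

    # Query the resume to locate matching content on remote servers.
    q = """
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?media ?where WHERE {
            ?media dc:title ?t ;
                   <http://example.org/media/location> ?where .
            FILTER (CONTAINS(LCASE(STR(?t)), "holiday"))
        }
    """
    for row in resume.query(q):
        print(row.media, "->", row.where)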

In “Semantic MPEG Query Format Validation and Processing,” also from IEEE MultiMedia’s special issue (Oct.–Dec. 2009, pp. 22–33), Mario Doeller, Ruben Tous, Matthias Gruhne, Miran Choi, Tae-Beom Lim, Jaime Delgado, and Armelle Yakou describe the semantic validation of the MPEG Query Format (MPQF) and the implementation of an MPQF engine on top of an Oracle database management system. MPQF enables interoperable querying among heterogeneous databases that use different metadata standards for describing multimedia content. This article introduces methods for evaluating MPQF semantic-validation rules not expressed by syntactic means within the XML Schema used by the databases. The authors highlight a prototype implementation of an MPQF-capable processing engine using the QueryByFreeText, QueryByXQuery, QueryByDescription, and QueryByMedia query types on a set of MPEG-7-based image annotations.
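Since MPQF is XML-based, a query-by-media request is essentially an XML document sent to the processing engine. The Python sketch below builds a simplified request of that shape; the element names and the namespace URN are approximations for illustration, not the normative MPQF schema.

    # Simplified illustration of a query-by-media request; element names and the
    # namespace are approximations, not the normative MPQF schema.
    import xml.etree.ElementTree as ET

    MPQF_NS = "urn:mpeg:mpqf:schema:2008"   # assumption, for illustration only
    ET.register_namespace("mpqf", MPQF_NS)

    def query_by_media(example_image_uri: str) -> bytes:
        """Build a minimal query asking for items similar to an example image."""
        root = ET.Element(f"{{{MPQF_NS}}}MpegQuery")
        query = ET.SubElement(root, f"{{{MPQF_NS}}}Query")
        inp = ET.SubElement(query, f"{{{MPQF_NS}}}Input")
        cond = ET.SubElement(inp, f"{{{MPQF_NS}}}QueryCondition")
        qbm = ET.SubElement(cond, f"{{{MPQF_NS}}}QueryByMedia")
        ET.SubElement(qbm, f"{{{MPQF_NS}}}MediaResource",
                      {"uri": example_image_uri})
        return ET.tostring(root, encoding="utf-8", xml_declaration=True)

    print(query_by_media("http://example.org/images/sunset.jpg").decode())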

In “Using Social Networking and Collections to Enable Video Semantics Acquisition,” a third article from IEEE MultiMedia’s special issue (Oct.–Dec. 2009, pp. 52–60), Stephen Davis, Ian Burnett, and Christian Ritz consider the multimedia value chain’s first elements: media production, acquisition, and metadata gathering. The authors bring together methods from video content annotation and social networking to solve problems associated with gathering metadata that describes user interactions with and opinions about video content. Then they aggregate individual users’ interaction metadata to form semantic metadata for a given video. The authors have successfully implemented their techniques in a custom Flex application based on the popular Facebook API.
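The aggregation step can be pictured with a toy example: collect per-user interaction events on a video timeline and keep, per segment, the tags that enough users agree on. The sketch below is illustrative only, not the authors' Facebook/Flex implementation; the event format and thresholds are assumptions.

    # Illustrative only: a toy aggregation of per-user interaction events into
    # per-segment tags; not the authors' Facebook/Flex implementation.
    from collections import Counter, defaultdict

    # (user, video_second, tag) -- hypothetical event log
    events = [
        ("u1", 12, "goal"), ("u2", 13, "goal"), ("u3", 14, "replay"),
        ("u1", 95, "interview"), ("u2", 96, "interview"),
    ]

    def aggregate(events, segment_len=10, min_support=2):
        """Group events into fixed-length segments and keep tags that enough
        users agree on, yielding simple semantic metadata for the video."""
        segments = defaultdict(Counter)
        for user, second, tag in events:
            segments[second // segment_len][tag] += 1
        return {seg: [t for t, n in tags.items() if n >= min_support]
                for seg, tags in sorted(segments.items())}

    print(aggregate(events))   # {1: ['goal'], 9: ['interview']}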

In “The Ariadne Infrastructure for Managing and Storing Metadata” (IEEE Internet Computing, July/Aug. 2009, pp. 18–25), Stefaan Ternier, Katrien Verbert, Gonzalo Parra, Bram Vandeputte, Joris Klerkx, Erik Duval, Vicente Ordóñez, and Xavier Ochoa analyze the standards-based Ariadne infrastructure for managing learning objects in an open, scalable architecture. The core infrastructure comprises several components such as the repository, federated search engine, finder, harvester, and metadata validation service. This infrastructure enables the integration of learning objects in multiple, distributed repository networks. Finally, the authors review several architectural patterns that they found useful in searching repositories in this area—namely, federated search, search on harvest, search adapter, and harvest adapter. It would be interesting to see this infrastructure working with multimedia metadata.
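The federated-search pattern mentioned above can be sketched in a few lines: fan the query out to several repositories in parallel and merge their answers. The repository "endpoints" and result format below are stand-ins, not the actual Ariadne API.

    # Minimal sketch of the federated-search pattern; the repositories and the
    # result format are assumptions, not the actual Ariadne API.
    from concurrent.futures import ThreadPoolExecutor

    REPOSITORIES = {
        "repo-a": lambda q: [f"repo-a:{q}:item1", f"repo-a:{q}:item2"],
        "repo-b": lambda q: [f"repo-b:{q}:item1"],
    }  # stand-ins for remote search endpoints

    def federated_search(query: str):
        """Query all repositories in parallel and merge their answers."""
        with ThreadPoolExecutor() as pool:
            result_lists = pool.map(lambda search: search(query),
                                    REPOSITORIES.values())
        return [item for results in result_lists for item in results]

    print(federated_search("metadata"))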

In “Data-Sharing P2P Networks with Semantic Approximation Capabilities” (IEEE Internet Computing, Sept./Oct. 2009, pp. 60–70), Federica Mandreoli, Riccardo Martoglia, Simona Sassatelli, and Wilma Penzo tackle the new information-retrieval challenges posed by heterogeneous data representations within peer-to-peer systems. The authors suggest leveraging the presence of semantic approximations between peers’ schemas to improve query routing. Their approach identifies the peers that best satisfy a user’s query and ranks the answers through a mechanism that promotes the most semantically relevant results. Their work applies to a scenario in which various actors in a multimedia chain-of-value network (such as network and telecom operators and service providers) must actively collaborate.
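The routing idea can be illustrated with a toy sketch: score each peer by how closely its schema terms approximate the query terms and forward the query only to the best-matching peers. The word-overlap score below is a placeholder, not the semantic-approximation mechanism proposed by the authors.

    # Toy illustration of semantic routing: forward the query only to the peers
    # whose schema terms best match the query. The similarity measure here is a
    # plain word-overlap score, not the mechanism proposed by the authors.
    peers = {
        "peer1": {"movie", "actor", "genre"},
        "peer2": {"song", "artist", "album"},
        "peer3": {"film", "director", "genre"},
    }

    def route(query_terms, peers, top_k=2):
        """Rank peers by overlap with the query terms and pick the best ones."""
        scored = sorted(peers.items(),
                        key=lambda kv: len(query_terms & kv[1]),
                        reverse=True)
        return [name for name, _ in scored[:top_k]]

    print(route({"movie", "genre"}, peers))   # ['peer1', 'peer3']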

In “3D Media and the Semantic Web” (IEEE Intelligent Systems, Mar./Apr. 2009, pp. 90–96), Michela Spagnuolo and Bianca Falcidieno introduce ways to integrate 3D media with Semantic Web technologies. Tools for coding, extracting, sharing, and retrieving the semantic content of 3D media are still far from satisfactory. The authors describe a means for embedding 3D into the Semantic Web, documenting and annotating 3D media for sharing, understanding its meaning, and retrieving it on the basis of content.

Related Resources

Numerous other articles from a wide range of journals and conferences deal with topics related to Multimedia Semantics; see our accompanying list of recommendations.

We’d also like to know what you think about Multimedia Semantics, so take this month’s poll and voice your opinion.

Harald Kosch is a full professor at the Faculty of Informatics and Mathematics, University of Passau, Germany. His research interests include multimedia metadata, multimedia databases, middleware, and Internet applications. Kosch has a PhD in computer science from Ecole Normale Supérieure de Lyon, France. Contact him at Harald.Kosch@uni-passau.de.


Christian Timmerer is an assistant professor in the Department of Information Technology, Multimedia Communication Group, Klagenfurt University, Austria. His research interests include the transport of multimedia content, multimedia adaptation in constrained and streaming environments, distributed multimedia adaptation, and Quality of Service/Quality of Experience. Timmerer has a PhD in applied informatics from Klagenfurt University. Contact him at christian.timmerer@itec.uni-klu.ac.at. Publications and MPEG contributions can be found under http://research.timmerer.com, follow him on http://www.twitter.com/timse7, and subscribe to his blog http://blog.timmerer.com.

Friday, July 24, 2009

Future Internet: Special Issue "Metadata and Markup"

This special issue of the journal Future Internet seeks papers reporting high-quality theoretical or practical work on Metadata and Markup. As data about data, metadata describes information about documents, events, locations, or people, but it also addresses qualitative aspects and language, and includes information about context or conditions of use. It may be used for naming, describing, cataloguing, and indicating ownership of a resource. Metadata helps to facilitate the understanding and management of data objects. While metadata describes characteristics of the data, markup identifies the specific type of data content and acts as a container for that document instance. Markup languages allow for the inclusion of many types of metadata, ranging from simple dates or keywords up to highly granular information such as Dublin Core or e-GMS.
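As a small, concrete example of the kind of metadata discussed here, the Python sketch below uses rdflib and the Dublin Core element set to describe a document (title, creator, date, rights); the resource URI and all values are hypothetical.

    # A minimal sketch of machine-readable metadata using the Dublin Core
    # vocabulary (via rdflib); the resource URI and values are hypothetical.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DC

    doc = URIRef("http://example.org/docs/report-2009")
    g = Graph()
    g.add((doc, DC.title, Literal("Metadata and Markup")))
    g.add((doc, DC.creator, Literal("Example Author")))
    g.add((doc, DC.date, Literal("2009-07-24")))
    g.add((doc, DC.rights, Literal("CC BY 3.0")))

    print(g.serialize(format="turtle"))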

We are looking for high-quality, original papers on any aspect of Metadata and Markup, including topics such as:
• standards for supporting knowledge markup, e.g., RDFa, microformats, GRDDL
• multimedia annotation (e.g., using MPEG-7)
• collaborative, shared tagging and annotation
• semantic annotation in Semantic Wikis
• semantic authoring and publishing
• document engineering
• deriving semantics from document structure and content
• ontology-based authoring and markup
• knowledge markup in the Semantic Web
• using semantic annotations to define knowledge
• integrated software architectures based on semantic annotation
• annotation of software components
• linguistic aspects of semantic annotations
• text mining for creating knowledge markup
• mining semantic information from blogs, forums, or news sources
• evaluation of annotation frameworks
• deriving formal semantics from (flat or hierarchical) tagging systems
• vocabularies and ontologies for semantic authoring and annotation
• tools for supporting knowledge markup, semantic annotation, semantic authoring, etc.

Andreas Dengel, Ph. D.
Guest Editor

Submission Information

All papers should be submitted to futureinternet@mdpi.org. Papers will be published continuously until the deadline and will be listed together on the special issue website.

Submitted papers should not have been published nor be under consideration for publication elsewhere. All papers are refereed through a peer-review process. A guide for authors is available on the Instructions for Authors page. Future Internet is a new international, peer-reviewed, quarterly open access journal published by Molecular Diversity Preservation International (MDPI).

Open Access publication is free of charge in the first few issues to be published in 2009.

Keywords

  • metadata and the web
  • semantics, semantic web
  • metadata capture and creation
  • metadata lifecycle
  • metadata schemes and ontologies
  • definition of metadata

Friday, March 20, 2009

9th Workshop on Multimedia Metadata (WMM'09)

Mathias already blogged about it, but I would also like to give a short summary of this event. This time the workshop of the Multimedia Metadata Community took place in Toulouse, co-located with CORESA. The final program included papers related to content-based multimedia retrieval & metadata, mobile services, and multimedia metadata management, as well as a doctoral symposium. The proceedings are available online here. The keynote was given by Timo Ojala (University of Oulu) on case studies of context-aware mobile multimedia services. I also presented a paper and, together with Stephane Pateux, gave a keynote on MPEG standards: where are we today? I've included my part of the presentation (i.e., systems) here...