Joachim Köhler, Konstantin Biatov, Martha Larson, …


Automatic Generation of Metadata for Audio-Visual Content in the Context of MPEG-7

Video Processing and Face Recognition DS




This paper describes our MPEG-7 project AGMA, which stands for Automatic Generation of Metadata for Audio-visual content in the context of MPEG-7. The goal of this three-year project, funded by the German Federal Ministry of Education and Research, is to develop new methods for automatic content analysis and to integrate them into an MPEG-7-compliant retrieval system. AGMA is divided into four components dealing with speech recognition, music recognition, face recognition, and system design. The content analysis methods are developed and optimized on a multimedia database of material recorded in the German parliament. The analysis is based on statistical methods, including hidden Markov models (HMMs) and neural networks (NNs), which extract features that characterize the data and can be encoded in the MPEG-7 framework. MPEG-7 is the new standard for multimedia description interfaces; one of its main components is the representation of audio-visual metadata in XML. The paper presents background information on media archiving, the MPEG-7 standard, and speech and image recognition, describes the German parliamentary database, and reports preliminary recognition results.
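To illustrate the XML-based metadata representation mentioned above, the following is a minimal sketch of emitting an MPEG-7-style description for a video segment using Python's standard library. The element names (`Mpeg7`, `VideoSegment`, `MediaTime`, etc.) follow the general shape of MPEG-7 Description Schemes but are simplified assumptions here, not the normative ISO/IEC 15938 schema.

```python
import xml.etree.ElementTree as ET


def build_description(title, start, duration):
    """Build a simplified MPEG-7-like XML description of one video segment.

    `start` and `duration` are assumed to be MPEG-7-style time strings
    (e.g. "T00:05:00" and "PT10M"); no validation is performed.
    """
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    video = ET.SubElement(desc, "VideoSegment")
    ET.SubElement(video, "Title").text = title
    time = ET.SubElement(video, "MediaTime")
    ET.SubElement(time, "MediaTimePoint").text = start
    ET.SubElement(time, "MediaDuration").text = duration
    return ET.tostring(root, encoding="unicode")


# Example: describe a ten-minute segment of a parliamentary recording.
print(build_description("Plenary session", "T00:05:00", "PT10M"))
```

In a full system, descriptions like this would carry the features extracted by the speech, music, and face recognition components, so that an MPEG-7-compliant retrieval system can query them uniformly.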

Artists / Authors


Germany, 2000-2003

Partners / Sponsors

Fraunhofer Institute for Media Communication, St. Augustin



Entry submitted by

Joachim Köhler, 15.06.2001



Additions to the keyword list