Projects funded by the DFG

Surveillance Videos meet Wearable Cameras on the Cloud
Duration: 15.03.2019 - 14.03.2022
funded by the Deutsche Forschungsgemeinschaft (DFG)
Partners: The Hebrew University of Jerusalem, Israel
Department of Computer Science and Information Technology, College of Science & Technology, Palestine
In this project we will develop a system that can simultaneously detect and track persons and their activities in videos from static and moving surveillance cameras. Non-overlapping views, where a person leaves the field of view of one camera and re-enters that of another, are a major challenge for person re-identification. Wearable cameras complicate the problem further through their abrupt movement patterns, constantly changing position, and varying lighting conditions. Furthermore, we investigate simultaneous calibration across all cameras in the system and extend our prior research on video synopsis to multi-camera scenarios. Finally, the raw and processed video streams are provided on a modern cloud infrastructure.
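
As an illustration of the re-identification step, the following minimal Python sketch matches a person who has left one camera against the open tracks of another camera by comparing appearance embeddings. It is a hedged sketch, not project code: embeddings would come from some feature extractor (random vectors stand in here), and the track names and the threshold are invented.

    # Illustrative sketch: appearance-based person re-identification
    # across non-overlapping cameras.
    import numpy as np

    def cosine_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def reidentify(query_emb, gallery, threshold=0.7):
        """Match a person leaving camera A against open tracks of camera B."""
        best_id, best_sim = None, threshold
        for track_id, emb in gallery.items():
            sim = cosine_sim(query_emb, emb)
            if sim > best_sim:
                best_id, best_sim = track_id, sim
        return best_id  # None means: open a new track

    rng = np.random.default_rng(0)
    gallery = {tid: rng.normal(size=128) for tid in ("cam2_track1", "cam2_track2")}
    print(reidentify(rng.normal(size=128), gallery))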

Augmented Synopsis of Surveillance Videos in Adaptive Camera Networks
Duration: 1.10.2015 - 30.9.2017
funded by the Deutsche Forschungsgemeinschaft (DFG)
Partners: The Hebrew University of Jerusalem
Department of Computer Science and Information Technology, College of Science & Technology, Palestine
In this project we will develop a novel system which will pave the way for a new kind of smart video surveillance network.
The video streams from surveillance cameras will feed into servers which perform real-time object detection and recognition.

Synergies of Tracking and Gait
Duration: 1.3.2015 - 28.2.2018
funded by the Deutsche Forschungsgemeinschaft (DFG)
In this project, synergies between two research areas will be exploited to obtain entirely new insights as well as new algorithms and methods for identifying a person in realistic surveillance scenarios. The research areas underlying this project are gait-based person identification and person tracking from video data. The combination of these two areas and the synergies between them shall be exploited to enable new security systems.
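
One straightforward way to combine the two cues is score-level fusion: for each candidate identity, a gait-based score and a tracking/appearance-based score are merged into a single decision. The Python sketch below is a hedged illustration, not the project's method; the weight and the scores are invented.

    # Illustrative score-level fusion of gait and appearance cues.
    def fuse_scores(gait_score, appearance_score, w_gait=0.4):
        """Weighted sum of per-candidate identification scores in [0, 1]."""
        return w_gait * gait_score + (1.0 - w_gait) * appearance_score

    # hypothetical (gait, appearance) scores per candidate identity
    candidates = {"person_a": (0.82, 0.55), "person_b": (0.40, 0.91)}
    fused = {pid: fuse_scores(g, a) for pid, (g, a) in candidates.items()}
    print(max(fused, key=fused.get), fused)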

Projects funded by the government

e-freedom: Facial Recognition in Public Areas Consistent with Fundamental Rights
Duration: 01.10.2018 – 30.09.2020
funded by Bayerisches Staatsministerium für Wirtschaft, Energie und Technologie
Partner: UNISCON GmbH, TÜV SÜD Digital Service GmbH, Axis Communications GmbH
Key objective: The tense public-safety situation suggests more and more video surveillance in public areas. Facial recognition is even considered capable of automatically recognizing suspects flagged as "potential attackers". The indiscriminate storage and processing of such data is, however, controversial under data protection law. Even the mere knowledge of potential surveillance influences the behavior of the citizens concerned, and the lower-threshold the investigative methods are, the greater this influence becomes. The project investigates how facial recognition in public areas must be designed technically so that such screening can only be conducted on an ad hoc basis, and so that law-abiding citizens can feel as free as if there were no surveillance cameras. The overall objective of the project is to investigate whether video surveillance with face recognition in public areas that conforms to fundamental rights is technically possible. The technical research objectives are to clarify the design and effectiveness of a technical seal, and to explore how errors in identifying persons ("false positives") can be made so unlikely that the privacy gains brought about by the seal are not lost through the revelations that come with false alerts.
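
The false-positive argument can be made concrete with a short back-of-the-envelope computation: even a small per-comparison false-match probability p produces frequent false alarms once thousands of passers-by are screened. The numbers below are purely illustrative.

    # Probability of at least one false alarm when screening n faces,
    # assuming independent comparisons with false-match probability p.
    def p_any_false_alarm(p, n_comparisons):
        return 1.0 - (1.0 - p) ** n_comparisons

    for p in (1e-3, 1e-5, 1e-7):
        print(f"p={p:g}: P(>=1 false alarm in 10000 faces) = "
              f"{p_any_false_alarm(p, 10_000):.4f}")

At p = 0.001 a false alarm among 10,000 screened faces is practically certain, which is why the privacy gains of the seal hinge on driving the false-match rate far lower.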


VR-Based Training for Explosive Ordnance Disposal (EOD)
Duration: 01.09.2018 – 31.10.2020
funded by the Bundesministerium für Wirtschaft und Energie (BMWi),
Zentrales Innovationsprogramm Mittelstand (ZIM)
Partner: EMC Kampfmittelbeseitigungs GmbH
The objective of this project is the development of a VR-based training tool for the simulation of realistic explosive ordnance disposal missions, including the modeling of current defusing methods and ignition mechanisms. The project will be deployed in a CAVE (Cave Automatic Virtual Environment) that is to be adapted for multi-user interaction scenarios, such that instructor and student can interact with objects and with each other in a multimodal fashion. To this end, methods of realizing distinct visual perspectives (projections) into the virtual environment as well as methods of multi-user interaction at different levels of precision will be researched and applied. The implemented methods may be integrated as part of a CAVE-to-HMD infrastructure or a multi-user CAVE.
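
Distinct per-user perspectives of this kind are commonly realized with head-tracked off-axis (asymmetric frustum) projections. The numpy sketch below computes one such projection matrix for a single CAVE wall from a tracked eye position, in the style of Kooima's generalized perspective projection; it is a generic sketch rather than the project's implementation, and the wall geometry and head position are made-up example values.

    import numpy as np

    def off_axis_projection(pa, pb, pc, pe, near=0.1, far=100.0):
        """Asymmetric frustum for eye position pe and a wall given by its
        lower-left (pa), lower-right (pb) and upper-left (pc) corners."""
        vr = (pb - pa) / np.linalg.norm(pb - pa)           # screen right
        vu = (pc - pa) / np.linalg.norm(pc - pa)           # screen up
        vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)    # screen normal
        va, vb, vc = pa - pe, pb - pe, pc - pe             # eye -> corners
        d = -(va @ vn)                                     # eye-wall distance
        l, r = (vr @ va) * near / d, (vr @ vb) * near / d
        b, t = (vu @ va) * near / d, (vu @ vc) * near / d
        P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                      [0, 2*near/(t-b), (t+b)/(t-b), 0],
                      [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                      [0, 0, -1, 0]])
        M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn]) # align with wall
        T = np.eye(4); T[:3, 3] = -pe                      # move eye to origin
        return P @ M @ T

    # Hypothetical front wall, 3 m wide and 2.25 m high, 1.5 m from origin;
    # one matrix per tracked user yields the distinct perspectives.
    pa = np.array([-1.5, 0.0, -1.5])
    pb = np.array([ 1.5, 0.0, -1.5])
    pc = np.array([-1.5, 2.25, -1.5])
    print(off_axis_projection(pa, pb, pc, pe=np.array([0.3, 1.7, 0.0])))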

Datengetriebene Wertschöpfung von Multimedia Content (Data-Driven Value Creation from Multimedia Content)
Duration: 01.01.2017 - 30.09.2018
funded by the Bayerische Staatsministerium für Wirtschaft und Medien, Energie und Technologie
Partner: ProSiebenSat.1 Media SE and munich media intelligence
The central goal of the project is the development of methods for the data-driven processing of multimedia content. This is achieved by researching, applying, and adapting methods from artificial intelligence and machine learning so that they can be used for data-driven applications in the multimedia domain. Many central algorithms in multimodal image and video processing are currently at an immature stage of development, so before any use in a commercial context an upstream industrial research phase is indispensable. The methods to be developed can, however, potentially support diverse use cases, all of which rest on the automated identification, metadata extraction, and algorithmic exploitation of big-data multimedia content.
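
As a hedged illustration of one such building block, the OpenCV sketch below segments a video into shots by comparing colour histograms of consecutive frames; detected shot boundaries are a typical first step for automated metadata extraction. The file name and the threshold are placeholders, and this is not project code.

    import cv2

    def detect_shot_boundaries(path, threshold=0.5):
        """Report frame indices where the colour histogram changes abruptly."""
        cap = cv2.VideoCapture(path)
        boundaries, prev_hist, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # histogram correlation near 1.0 means similar frames
                if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                    boundaries.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return boundaries

    print(detect_shot_boundaries("example.mp4"))  # hypothetical input file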

EU Projects

iHEARu: Intelligent systems' Holistic Evolving Analysis of Real-life Universal speaker characteristics
FP7 ERC Starting Grant
Duration: 01.01.2014 - 31.12.2018

The iHEARu project aims to push the limits of intelligent systems for computational paralinguistics by pursuing Holistic analysis of multiple speaker attributes at once, Evolving and self-learning systems, and deeper Analysis of acoustic parameters, all on Realistic, large-scale data, ultimately progressing from individual analysis tasks towards Universal speaker characteristics analysis that can easily learn about, and adapt to, new, previously unexplored characteristics.
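
The holistic analysis of several speaker attributes at once can be pictured as a multi-target classifier operating on one shared feature representation. The scikit-learn toy below is only a sketch of that idea, not the project's learning machinery; the acoustic features are random stand-ins and the two attribute labels are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))        # stand-in acoustic feature vectors
    # two invented binary speaker attributes, predicted jointly
    Y = np.column_stack([X[:, 0] > 0, X[:, 1] > 0]).astype(int)

    clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
    print(clf.predict(X[:3]))             # one label per attribute and sample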


HOL-I-WOOD PR: Holonic Integration of Cognition, Communication and Control for a Wood Patching Robot
EU FP7 STREP
Duration: 01.01.2012 - 31.12.2014
Partners: Microtec S.R.L, Italy; Lulea Tekniska Universitet, Sweden; TU Wien, Austria; Springer Maschinenfabrik AG, Austria; LIP Bled, Slovenia; TTTech Computertechnik AG, Austria
Repair and patching of resin galls and loose dead knots is a costly and disruptive process in the inline production of the timber industry. The human workforce involved in these production tasks is hard to replace with a machine. Another task requiring human recognition and decision-making capabilities, occurring at an earlier stage of the production line, is the detection and classification of significant artefacts in wooden surfaces. This project proposes a holonic concept that subsumes automated visual inspection and quality/artefact classification by a skilled robot, visually guided and controlled by non-linear approaches that combine manipulation with energy saving in trajectory planning under real-time conditions, enabling the required scalability for a wide range of applications. The interaction of these holonic sub-systems is implemented with agent technology based on a real-time communication concept, while fusing multi-sensor data and information at different spatial positions of the production line. The feasibility of inter-linking independent autonomous processes, i.e. agents for inspection, wood-processing, transport (conveying), and repair by a patching robot, is demonstrated by a pilot in a glue lam factory. A mobile HMI concept makes interaction with the machine park easy to control, reliable, and efficient, while at the same time increasing safety for workers within the potentially dangerous working environment of glue lam factories and saw mills.


ASC-Inclusion: Integrated Internet-based Environment for Social Inclusion of Children with Autism Spectrum Conditions (ASC)
EU FP7 STREP
Duration: 01.11.2011 - 31.12.2014
Role: Coordinator, Coauthor Proposal, Project Steering Board Member, Workpackage Leader
Partners: TUM, University of Cambridge, Bar Ilan University, Compedia, University of Genoa, Karolinska Institutet, Autism Europe
The ASC-Inclusion project is developing interactive software to help children with autism understand and express emotions through facial expressions, tone of voice and body gestures. The project aims to create such an internet-based platform and to evaluate its effectiveness, aimed at children with ASC (and other groups, such as children with ADHD and socially neglected children) and those interested in their inclusion. The platform will combine several state-of-the-art technologies in one comprehensive virtual world, including analysis of users' gestures and facial and vocal expressions using a standard microphone and webcam, training through games, text communication with peers and smart agents, animation, and video and audio clips. The user's environment will be personalized according to the individual profile and sensory requirements, and designed to be motivating. Carers will be offered their own supportive environment, including professional information, reports on the child's progress and use of the system, and forums for parents and therapists.


ALIAS: Adaptable Ambient Living Assistant
AAL-2009-2-049
Duration: 01.07.2010 - 30.06.2013
Partners: Cognesys, Aachen; EURECOM, Sophia-Antipolis, France; Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.; Guger Technologies, Graz, Austria; MetraLabs, Ilmenau; PME Familienservice GmbH, Berlin; Technische Universität Ilmenau; YOUSE GmbH, Berlin
The objective of the project Adaptable Ambient LIving ASsistant (ALIAS) is the product development of a mobile robot system that interacts with elderly users, provides cognitive assistance in daily life, and promotes social inclusion by creating connections to people and events in the wider world. ALIAS is embodied by a mobile robot platform with the capacity to monitor, interact with and access information from on-line services, without manipulation capabilities. The function of ALIAS is to keep the user linked to the wide society and in this way to improve her/his quality of life by combating loneliness and increasing cognitively stimulating activities.


CustomPacker: Highly Customizable and Flexible Packaging Station for mid- to upper sized Electronic Consumer Goods using Industrial Robots
EU FP7 - ICT
Duration: 01.07.2010 - 30.06.2013
Partners: FerRobotics Compliant Robot Technology GmbH, Austria; Loewe AG, Kronach; MRK-Systeme GmbH, Augsburg; PROFACTOR GmbH, Steyr-Gleink, Austria; Tekniker, Eibar, Gipuzkoa, Spain; VTT, Finland
The project Highly Customizable and Flexible Packaging Station for mid- to upper sized Electronic Consumer Goods using Industrial Robots (CustomPacker) aims at developing and integrating a scalable and flexible packaging assistant that aids human workers while packaging mid- to upper-sized and mostly heavy goods. Productivity will be increased by exploiting direct human-robot cooperation, overcoming the constraints of current safety regulations. The main goal of CustomPacker is to design and assemble a packaging workstation mostly from standard hardware components, resulting in a universal handling system for different products.


PROMETHEUS: Prediction and inteRpretatiOn of huMan bEhaviour based on probabilisTic structures and HeterogEneoUs sensorS
EU Seventh Framework Programme, Theme ICT-1.2.1
Duration: 01.01.2008 - 31.12.2010
The overall goal of the project is the development of principled methods to link fundamental sensing tasks using multiple modalities, and automated cognition regarding the understanding of human behaviour in complex indoor environments, at both individual and collective levels. Given the two above principles, the consortium will conduct research on three core scientific and technological objectives:
- sensor modelling and information fusion from multiple, heterogeneous perceptual modalities,
- modelling, localization, and tracking of multiple people,
- modelling, recognition, and short-term prediction of continuous complex human behavior.
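
For the tracking and short-term prediction objectives, a classic building block is a Kalman filter with a constant-velocity motion model. The self-contained numpy sketch below is a generic illustration of that technique, not PROMETHEUS code; the measurements are invented.

    import numpy as np

    class ConstantVelocityKalman:
        """2-D constant-velocity model; state = [x, y, vx, vy]."""
        def __init__(self, dt=1.0, q=1e-2, r=1.0):
            self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
            self.Q, self.R = q * np.eye(4), r * np.eye(2)
            self.x, self.P = np.zeros(4), np.eye(4)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]              # short-term position forecast

        def update(self, z):
            y = z - self.H @ self.x        # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    kf = ConstantVelocityKalman()
    for z in ([0.0, 0.0], [1.0, 0.5], [2.0, 1.1]):   # noisy person positions
        kf.predict()
        kf.update(np.array(z))
    print(kf.predict())                    # predicted next position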


SEMAINE: Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression
EU FP7 STREP
Duration: 01.01.2008 - 31.12.2010
Partners: DFKI; Queen's University Belfast (QUB); Imperial College of Science, Technology and Medicine, London; University of Twente; University Paris VIII; CNRS-ENST
The SEMAINE project aims to build a Sensitive Artificial Listener (SAL), a multimodal dialogue system which can interact with humans via a virtual character, sustain an interaction with a user for some time, and react appropriately to the user's non-verbal behaviour. In the end, this SAL system will be released, to a large extent, as an open-source research tool to the community.
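
How a listener agent might react appropriately to the user's non-verbal behaviour can be sketched as a rule-based mapping from estimated valence and arousal to a backchannel reaction. The toy below is invented for illustration and is far simpler than the actual SEMAINE architecture.

    # Toy listener: choose a non-verbal reaction from the user's
    # estimated valence (negative..positive) and arousal (calm..excited).
    def choose_backchannel(valence, arousal):
        if arousal > 0.6:
            return "lean forward, raised eyebrows" if valence > 0 else "concerned frown"
        if valence > 0.3:
            return "smile and nod"
        if valence < -0.3:
            return "sympathetic head tilt"
        return "neutral nod"

    # simulated per-turn estimates from audio/video analysis
    for v, a in [(0.7, 0.8), (-0.5, 0.2), (0.0, 0.1)]:
        print(f"valence={v:+.1f}, arousal={a:.1f} -> {choose_backchannel(v, a)}")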


AMIDA: Augmented Multi-party Interaction - Distance Access
EU IST Programme


AMI: Augmented Multi-party Interaction
EU IST Programme



FGNet: European face and gesture recognition working group
EU IST Programme


m4: Multi-Modal Meeting Manager
EU IST Programme


SAFEE: Security of Aircraft in the Future European Environment
EU Sixth Framework Programme
Finished: 30.4.2008
The integrated project Security of Aircraft in the Future European Environment (SAFEE) within the Sixth Framework Programme is designed to restore full confidence in the air transport industry.

Excellence Initiative

The CoTeSys cluster of excellence (which stands for "COgnition for TEchnical SYstems") investigates cognition for technical systems such as vehicles, robots, and factories. Cognitive technical systems are equipped with artificial sensors and actuators, integrated and embedded into physical systems, and act in a physical world. They differ from other technical systems in that they perform cognitive control and have cognitive capabilities.
Projects:
RealEYE (1.08-12.09), ACIPE (11.06-12.09), JAHIR (11.06-12.10).


Graduate School of Systemic Neurosciences
In the framework of the German excellence initiative (supported by the Wissenschaftsrat and the Deutsche Forschungsgemeinschaft, DFG), the Ludwig-Maximilians-Universität Munich (LMU) has founded a new Graduate School of Systemic Neurosciences (GSN-LMU). The school offers clearly structured training programs for PhD and MD/PhD candidates. Tight links exist to the Elite Network Bavaria Master Program Neurosciences.

Industrial Projects

Part of the framework program CAR@TUM between BMW Forschung und Technik GmbH, TU München and the Universität Regensburg
Runtime: 1.4.2008 – 31.3.2010
Current display elements in cars range from the central instrument cluster and the navigation display to the Head-Up Display (HUD). These large display areas can be used for immediate, short-term, and long-term assistance for the driver. The introduction of the HUD raises the question of which information can sensibly be displayed on it. The possibilities of a digital instrument cluster and of alternative display technologies such as LED strips are also being investigated. Together with data provided by Car2X communication, an assistance system shall be developed to support anticipatory driving.


TCVC: Talking Car and Virtual Companion
Cooperation with Continental Automotive GmbH
Runtime: 01.06.2008 - 30.11.2008
TCVC provides expertise on emotion in the car, covering a requirements analysis, potential near-future use cases, a technology assessment, and a user acceptance study.

ICRI: In-Car Real Internet
Cooperation with Continental Automotive GmbH
Runtime: 01.06.2008 - 30.11.2008
ICRI aims at benchmarking internet browsers on embedded platforms as well as at developing an integrated multimodal demonstrator for internet in the car. Investigated modalities include handwriting, touch gestures, and natural speech, in addition to conventional GUI interaction. The focus lies on MMI development with an embedded realisation.

cUSER 2
Cooperation with Toyota
Runtime: 08.01.2007 - 31.07.2007
The aim of the cUSER follow-up project is to establish a system that interprets human interest by combined speech and facial expression analysis, based on multiple input analyses. Besides aiming for the highest possible accuracy through subject adaptation, class-balancing strategies, and fully automatic segmentation by individual audio and video stream analysis, cUSER 2 focuses on real-time operation through cost-sensitive feature-space optimization, utilization of graphics processing power, and high-performance programming methods. Furthermore, feasibility in real recognition scenarios will be evaluated.
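
A hedged sketch of the decision-level side of such a system: per-frame audio and video interest scores are fused with a fixed weight and smoothed over a sliding window, which is cheap enough for real-time use. The window size, weight, and scores are illustrative, not project parameters.

    from collections import deque

    class InterestFusion:
        """Late fusion of streaming audio/video interest scores."""
        def __init__(self, window=10, w_audio=0.5):
            self.audio = deque(maxlen=window)
            self.video = deque(maxlen=window)
            self.w_audio = w_audio

        def push(self, audio_score, video_score):
            self.audio.append(audio_score)
            self.video.append(video_score)
            a = sum(self.audio) / len(self.audio)
            v = sum(self.video) / len(self.video)
            return self.w_audio * a + (1 - self.w_audio) * v

    fusion = InterestFusion()
    for a, v in [(0.2, 0.4), (0.8, 0.7), (0.9, 0.6)]:   # simulated scores
        print(f"fused interest: {fusion.push(a, v):.2f}")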

cUSER
Cooperation with Toyota
Runtime: 01.08.2005 - 30.09.2006
The aim of this project was an audiovisual approach to the recognition of spontaneous human interest.

NaDia: Natural Dialogs for Complex In-Vehicle Information Services
Cooperation with the BMW Group and CLT Sprachtechnologie GmbH.
Runtime: 1.9.2001 - 31.5.2003 and 1.4.2004 - 30.9.2005

FERMUS: Error-Robust, Multimodal Speech Dialogs
Cooperation with the BMW AG, the DaimlerChrysler AG and SiemensVDO Automotive.
Duration: 01.03.2000-30.06.2003
The primary intention of the FERMUS project was to localize and evaluate various strategies for a dedicated analysis of potential error patterns during human-machine interaction with information and communication systems in upper-class cars. To reach this goal, we employed a broad set of advanced, mainly recognition-based input modalities, such as interfaces for natural speech and dynamic gestural input. In particular, emotional patterns of the driver have been integrated to generate context-adequate dialog structures.


ADVIA: Adaptive In-Car Dialogs
Cooperation with the BMW Group.
Runtime: 01.07.1998-31.12.2000

SOMMIA: Speech-Oriented Man-Machine Interface in the Automobile
Cooperation with SiemensVDO Automotive.
Runtime: 01.06.2000-30.09.2001
The SOMMIA project focused on the design and evaluation of an ergonomic and generic operation concept for a speech-based MMI integrated in a car MP3 player or comparable automotive applications. In addition, the system was subject to several economic and geometric boundary conditions: a two-line, 16-character display with a small set of LEDs, and a speaker-independent full-word recognizer with an active vocabulary of 30 to 50 words. Nevertheless, the interface had to meet high technical requirements: its handling should be easy to learn, comfortable and, above all, intuitive and interactively explorable.
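
To illustrate the kind of interactively explorable operation concept such constraints allow, the toy Python state machine below navigates a two-level menu on a 2x16-character display using a handful of command words of the sort a 30-50 word recognizer could supply. All menu items and command words are invented for the example.

    # Toy menu navigation for a 2x16 character display.
    MENU = {"root": ["play", "playlist", "settings"],
            "playlist": ["rock", "pop", "back"]}

    def render(menu, cursor):
        """Return the two 16-character display lines."""
        line1 = f"{menu}: {MENU[menu][cursor]}"[:16].ljust(16)
        line2 = "next select back"[:16].ljust(16)
        return line1, line2

    def handle(menu, cursor, word):
        """Advance the menu state for one recognized command word."""
        items = MENU[menu]
        if word == "next":
            return menu, (cursor + 1) % len(items)
        if word == "select" and items[cursor] in MENU:
            return items[cursor], 0
        if word == "back":
            return "root", 0
        return menu, cursor          # unknown word: stay put

    state = ("root", 0)
    for word in ["next", "select", "next"]:   # simulated recognizer output
        state = handle(*state, word)
        print(render(*state))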

Other Projects

AGMA: Automatic Generation of Audio-Visual Metadata in the MPEG-7 Framework (BMBF, subcontractor of FHG-IMK)

MUGSHOT: Face profile recognition using hybrid pattern recognition techniques (DFG)

Robust Speech Recognition

Robust analysis, recognition and interpretation of speech based on a single-stage stochastic decoder (DFG)
Duration: 01.04.2001-31.05.2004