Technical Program

MMSP-P3: Audiovisual and Multimodal Processing

Session Type: Poster
Time: Thursday, March 29, 16:30 - 18:30
Location: Poster Area D
Session Chairs: Philip A. Chou, Microsoft Research, and Takayuki Nagai, University of Electro-Communications
MMSP-P3.1: SEGMENTATION OF TV SHOWS INTO SCENES USING SPEAKER DIARIZATION AND SPEECH RECOGNITION
         Hervé Bredin; LIMSI/CNRS
MMSP-P3.2: A MULTI-MODAL HIGHLIGHT EXTRACTION SCHEME FOR SPORTS VIDEOS USING AN INFORMATION-THEORETIC EXCITABILITY MEASURE
         Taufiq Hasan; The University of Texas at Dallas
         Hynek Boril; The University of Texas at Dallas
         Abhijeet Sangwan; The University of Texas at Dallas
         John H. L. Hansen; The University of Texas at Dallas
MMSP-P3.3: IMPROVING AUTOMATIC DUBBING WITH SUBTITLE TIMING OPTIMISATION USING VIDEO CUT DETECTION
         Jindrich Matousek; University of West Bohemia
         Jakub Vit; University of West Bohemia
MMSP-P3.4: CLUSTERING AND SYNCHRONIZING MULTI-CAMERA VIDEO VIA LANDMARK CROSS-CORRELATION
         Nicholas J. Bryan; Stanford University
         Paris Smaragdis; University of Illinois at Urbana-Champaign
         Gautham J. Mysore; Adobe Systems Inc.
MMSP-P3.5: MULTIMODAL INFORMATION FUSION AND TEMPORAL INTEGRATION FOR VIOLENCE DETECTION IN MOVIES
         Cédric Penet; Technicolor
         Claire-Hélène Demarty; Technicolor
         Guillaume Gravier; CNRS/IRISA
         Patrick Gros; INRIA Rennes
MMSP-P3.6: DICTIONARY LEARNING BASED PAN-SHARPENING
         Dehong Liu; Mitsubishi Electric Research Laboratories
         Petros T. Boufounos; Mitsubishi Electric Research Laboratories
MMSP-P3.7: A HIERARCHICAL FRAMEWORK FOR MODELING MULTIMODALITY AND EMOTIONAL EVOLUTION IN AFFECTIVE DIALOGS
         Angeliki Metallinou; University of Southern California
         Athanasios Katsamanis; University of Southern California
         Shrikanth Narayanan; University of Southern California