Harmonium models for semantic video representation and classification
Abstract
Accurate and efficient video classification demands the fusion of multimodal information and the use of intermediate representations. Combining these two ideas in a single framework, we propose a probabilistic approach to video classification using intermediate semantic representations derived from multimodal features. Based on a class of bipartite undirected graphical models named the harmonium, our approach represents video data as latent semantic topics derived by jointly modeling transcript keywords and color-histogram features, and performs classification using these latent topics under a unified framework. We demonstrate satisfactory classification performance of our approach on a benchmark dataset, as well as interesting insights into the data.
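As background, a harmonium couples a layer of observed features with a layer of latent topic variables through a bipartite undirected graph, so that the latent topics summarize the observations while remaining conditionally independent given them. A minimal sketch of such a joint model over keyword features $\mathbf{x}$, color-histogram features $\mathbf{y}$, and latent topics $\mathbf{h}$, assuming simple linear sufficient statistics (an illustrative choice, not necessarily the exact parameterization developed in this work), is

\[
p(\mathbf{x}, \mathbf{y}, \mathbf{h}) \;\propto\; \exp\Big\{ \textstyle\sum_{i} \alpha_i x_i + \sum_{j} \beta_j y_j + \sum_{k} \lambda_k h_k + \sum_{i,k} W_{ik}\, x_i h_k + \sum_{j,k} U_{jk}\, y_j h_k \Big\},
\]

where the pairwise terms $W_{ik} x_i h_k$ and $U_{jk} y_j h_k$ couple each observed modality to the shared latent topics that serve as the intermediate semantic representation.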