IBM TRECVID'08 high-level feature detection
Apostol Natsev, Wei Jiang, et al.
TRECVID 2008
For large-vocabulary handwriting-recognition applications, such as note-taking, word-level language modeling is of key importance: it constrains the recognizer's search and contributes to the scoring of hypothesized texts. We discuss the creation of a word-unigram language model, which associates probabilities with individual words. Typically, such models are derived from a large, diverse text corpus. We describe a three-stage algorithm for determining a word unigram from such a corpus. The first stage is tokenization, the segmenting of the corpus into words. In the second, we select for the model a subset of the distinct words found during tokenization. Complexities of these stages are discussed. Finally, we create recognizer-specific data structures for the word set and unigram. Applying our method to a 600-million-word corpus, we generate a 50,000-word model that eliminates 45% of the word-recognition errors made by a baseline system employing only a character-level language model. © 2001 IEEE.
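The three stages described in the abstract map naturally onto a small pipeline: tokenize the corpus, keep the most frequent distinct words, and attach probabilities to the retained vocabulary. The sketch below is only an illustration of that idea under simplifying assumptions (a toy regex tokenizer, a plain dictionary standing in for the recognizer-specific data structures, and a made-up example corpus); it is not the paper's implementation.

```python
# Minimal sketch of a three-stage word-unigram construction.
# The tokenizer, vocabulary size, and output structure are illustrative
# assumptions, not the published system.
import re
from collections import Counter


def tokenize(text):
    """Stage 1: segment raw corpus text into word tokens.

    A real tokenizer must handle punctuation, hyphenation, numbers, and
    case; a simple word regex serves as a stand-in here.
    """
    return re.findall(r"[A-Za-z']+", text.lower())


def select_vocabulary(counts, vocab_size=50_000):
    """Stage 2: keep the most frequent distinct words for the model."""
    return [word for word, _ in counts.most_common(vocab_size)]


def build_unigram(counts, vocabulary):
    """Stage 3: map each retained word to its unigram probability.

    A recognizer would typically store this in a specialized structure
    (e.g. a trie plus a probability table); a dict suffices to show the idea.
    """
    total = sum(counts[word] for word in vocabulary)
    return {word: counts[word] / total for word in vocabulary}


if __name__ == "__main__":
    corpus = "the quick brown fox jumps over the lazy dog and the cat"
    counts = Counter(tokenize(corpus))
    vocab = select_vocabulary(counts, vocab_size=8)
    model = build_unigram(counts, vocab)
    print(sorted(model.items(), key=lambda kv: -kv[1]))
```

In practice the vocabulary-selection stage also trades off coverage against recognizer search cost, which is why the paper settles on a 50,000-word model rather than the full set of distinct words in the 600-million-word corpus.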
Srideepika Jayaraman, Chandra Reddy, et al.
Big Data 2021
Silvio Savarese, Holly Rushmeier, et al.
Proceedings of the IEEE International Conference on Computer Vision
James E. Gentile, Nalini Ratha, et al.
BTAS 2009