Automatic task slots assignment in Hadoop MapReduce
Kun Wang, Juwei Shi, et al.
PACT 2011
For large-vocabulary handwriting-recognition applications, such as note-taking, word-level language modeling is of key importance, to constrain the recognizer's search and to contribute to the scoring of hypothesized texts. We discuss the creation of a word-unigram language model, which associates probabilities with individual words. Typically, such models are derived from a large, diverse text corpus. We describe a three-stage algorithm for determining a word unigram from such a corpus. First is tokenization, the segmenting of a corpus into words. Second, we select for the model a subset of the set of distinct words found during tokenization. Complexities of these stages are discussed. Finally, we create recognizer-specific data structures for the word set and unigram. Applying our method to a 600-million-word corpus, we generate a 50,000-word model which eliminates 45% of word-recognition errors made by a baseline system employing only a character-level language model. © 2001 IEEE.
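The three-stage procedure described in the abstract (tokenization, vocabulary selection, building recognizer-facing structures) can be illustrated with a small sketch. The code below is not from the paper; the function name build_unigram_model, the vocab_size parameter, and the simple regex tokenizer are illustrative assumptions, and a real recognizer would use a more compact, search-friendly structure than a plain dictionary.

```python
# Sketch of the three stages: (1) tokenize a corpus into words,
# (2) keep a fixed-size set of the most frequent distinct words,
# (3) map each retained word to its unigram probability.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Stage 1: segment text into word tokens (simplified; the paper
    # discusses the real complexities of corpus tokenization).
    return re.findall(r"[A-Za-z']+", text.lower())

def build_unigram_model(corpus_lines, vocab_size: int = 50_000) -> dict[str, float]:
    counts = Counter()
    for line in corpus_lines:
        counts.update(tokenize(line))        # Stage 1: accumulate word counts

    vocab = counts.most_common(vocab_size)   # Stage 2: select the word subset
    total = sum(c for _, c in vocab)

    # Stage 3: recognizer-facing structure -- here just a dict of probabilities.
    return {word: count / total for word, count in vocab}

if __name__ == "__main__":
    sample = ["the quick brown fox jumps over the lazy dog", "the dog barks"]
    print(build_unigram_model(sample, vocab_size=5))
```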