Inline updates for HMMs
Abstract
Most training algorithms for HMMs assume that the whole batch of observation sequences is given ahead of time; this is particularly the case for the standard EM algorithm. However, in many applications, such as speech, the data is generated by a temporal process. Singer and Warmuth developed online updates for HMMs that process a single observation sequence in each update. In this paper we take this approach one step further and develop an inline update for training HMMs, in which the parameters are updated after processing each single symbol of the current observation sequence. The methodology for deriving the online and the new inline updates differs substantially from the standard EM motivation. We show experimentally on speech data that even when all observation sequences are available ahead of time (the batch setting), the online update converges faster than the standard batch EM update, and the inline update converges faster still.
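To make the three update granularities concrete, the following is a minimal NumPy sketch of the schedules only. It is not the authors' update rule: the E-step is the standard forward-backward recursion, and the M-step here is a plain EM re-estimate blended with the old parameters by a hypothetical step size `eta`; the per-symbol schedule is approximated by recomputing statistics on growing prefixes, whereas a real inline update would presumably be incremental. The names `expected_counts`, `blend`, and `train` are illustrative.

```python
import numpy as np

def expected_counts(pi, A, B, obs):
    """Forward-backward (Baum-Welch E-step) for one sequence: returns
    expected transition counts (n x n) and emission counts (n x m)."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    beta = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    trans = np.zeros((n, n))
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        trans += x / x.sum()
    emit = np.zeros((n, B.shape[1]))
    for t, o in enumerate(obs):
        emit[:, o] += gamma[t]
    return trans, emit

def blend(old, counts, eta):
    """Move the old stochastic matrix toward the normalized counts by a
    step size eta. Rows with no counts (e.g. a length-1 prefix yields no
    transition counts) are left unchanged."""
    row = counts.sum(axis=1, keepdims=True)
    target = np.where(row > 0, counts / np.maximum(row, 1e-12), old)
    return (1 - eta) * old + eta * target

def train(pi, A, B, sequences, schedule, eta=0.3, epochs=3):
    """The three update schedules compared in the abstract."""
    for _ in range(epochs):
        if schedule == "batch":      # classical EM: one update per pass
            stats = [expected_counts(pi, A, B, s) for s in sequences]
            trans = sum(t for t, _ in stats)
            emit = sum(e for _, e in stats)
            A, B = blend(A, trans, 1.0), blend(B, emit, 1.0)
        elif schedule == "online":   # one update per observation sequence
            for s in sequences:
                trans, emit = expected_counts(pi, A, B, s)
                A, B = blend(A, trans, eta), blend(B, emit, eta)
        elif schedule == "inline":   # one update per symbol (via prefixes)
            for s in sequences:
                for t in range(1, len(s) + 1):
                    trans, emit = expected_counts(pi, A, B, s[:t])
                    A, B = blend(A, trans, eta), blend(B, emit, eta)
    return A, B

# Toy usage: a 2-state, 2-symbol HMM trained under the inline schedule.
rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])
A0 = np.array([[0.7, 0.3], [0.4, 0.6]])
B0 = np.array([[0.6, 0.4], [0.2, 0.8]])
seqs = [rng.integers(0, 2, size=25) for _ in range(10)]
A_in, B_in = train(pi, A0, B0, seqs, schedule="inline")
```

With `eta=1.0` and the batch schedule, `blend` reduces to the standard EM M-step, so the sketch isolates exactly one design dimension: how often the parameters are refreshed between E-step computations.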