Fallback Variable History NNLMs: Efficient NNLMs by precomputation and stochastic training.
This paper presents a new method to reduce the computational cost of using Neural Networks as Language Models during recognition, in some particular scenarios. It is based on a Neural Network that considers input contexts of different lengths, which eases the use of a fallback mechanism together with the precomputation of softmax normalization constants for these contexts. The proposed approach is empirically validated, showing its capability to emulate lower-order N-grams with a single Neural Network. A machine translation task shows that the proposed model constitutes a good solution to the normalization cost of the output softmax layer of Neural Networks in some practical cases, without a significant impact on performance while improving the system speed.
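To make the idea concrete, here is a minimal sketch of the fallback-with-precomputation scheme, not the authors' implementation: the softmax normalization constant Z(h) is precomputed for a chosen set of histories, and at recognition time a history whose constant is not cached falls back to a shorter suffix, emulating a lower-order N-gram. All names (`score_fn`, `precompute_norms`, `fallback_prob`) are hypothetical.

```python
import math
from typing import Callable, Dict, Sequence, Tuple

History = Tuple[str, ...]
ScoreFn = Callable[[History], Sequence[float]]  # NNLM output-layer scores for every word


def precompute_norms(score_fn: ScoreFn, histories: Sequence[History]) -> Dict[History, float]:
    """Precompute Z(h) = sum_w exp(score(w, h)) for each selected history h."""
    return {h: sum(math.exp(s) for s in score_fn(h)) for h in histories}


def fallback_prob(word_id: int, history: History,
                  score_fn: ScoreFn, norms: Dict[History, float]) -> float:
    """P(word | history), falling back to shorter suffixes until Z(h) is cached.

    The empty history () must always be precomputed so the loop terminates.
    """
    assert () in norms, "precompute the empty (unigram) context"
    h = tuple(history)
    while h not in norms:
        h = h[1:]  # drop the oldest word: fall back to a shorter context
    return math.exp(score_fn(h)[word_id]) / norms[h]


# Toy usage: a 3-word vocabulary with dummy scores standing in for the NNLM.
if __name__ == "__main__":
    def toy_scores(h: History) -> Sequence[float]:
        return [0.1 * len(h), 0.2, 0.3]

    norms = precompute_norms(toy_scores, [(), ("a",), ("a", "b")])
    # ("x", "a", "b") is not cached, so the model falls back to ("a", "b").
    print(fallback_prob(1, ("x", "a", "b"), toy_scores, norms))
```

The point of the table lookup is that the expensive sum over the vocabulary is paid once per cached context rather than once per query, which is where the reported speed gain comes from.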