Neural Probabilistic Language Model Toolkit

NPLM is a toolkit for training and using feedforward neural language models (Bengio et al., 2003). It is fast even for large vocabularies (100k words or more): a model can be trained on a billion words of data in about a week, and can be queried in about 40 μs, fast enough to be used inside a decoder for machine translation.
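To make the model class concrete, here is a minimal sketch of a Bengio-style feedforward language model query: the (n−1) context words are mapped to embeddings, concatenated, passed through a hidden layer, and a softmax over the vocabulary gives the next-word probability. This is an illustration of the architecture only, not NPLM's actual API; all names, dimensions, and the random initialization are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real NPLM model would use a vocabulary of 100k or more.
vocab_size, embed_dim, hidden_dim, context_size = 50, 8, 16, 3  # a 4-gram model

# Randomly initialized parameters, standing in for trained weights.
C = rng.normal(0.0, 0.1, (vocab_size, embed_dim))                # word embeddings
W1 = rng.normal(0.0, 0.1, (context_size * embed_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(0.0, 0.1, (hidden_dim, vocab_size))
b2 = np.zeros(vocab_size)

def log_prob(context, word):
    """Log P(word | context) under a feedforward neural LM.

    context: list of (n-1) word ids; word: id of the predicted word.
    """
    x = C[context].reshape(-1)        # concatenate the context embeddings
    h = np.tanh(x @ W1 + b1)          # hidden layer
    logits = h @ W2 + b2
    logits = logits - logits.max()    # stable log-softmax
    return logits[word] - np.log(np.exp(logits).sum())

print(log_prob([3, 7, 12], 5))
```

A single query is just two small matrix-vector products plus a softmax, which is why lookups can run in tens of microseconds; the expensive part at query time is normalizing over a large vocabulary, which NPLM addresses with fast training and scoring techniques.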

NPLM was written by Ashish Vaswani, with contributions from David Chiang and Victoria Fossum. It is distributed under the open-source MIT License.