by Thamme Gowda in Note tags: NMT

Sequence-to-sequence transduction, e.g. neural machine translation (NMT), is a general problem: it involves transforming a sequence of symbols into another sequence of symbols, where both the input and the output can vary in length.

Challenges

  • Sequential information: The order of input and output symbols is important.

  • Variable-length sequences with long-term dependencies: Sequences can be extremely long, and symbols may have dependencies that span the entire sequence, e.g. the text of a book, with dependencies across chapters.

  • Unbounded vocabulary: e.g. the open-ended vocabulary of natural languages.

  • Imbalanced distribution: Some symbols appear frequently while others appear rarely, e.g. the long-tailed distribution of types in natural languages (see the sketch after this list).
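
To make the last point concrete, here is a small sketch in plain Python (the toy corpus is invented for illustration) that counts type frequencies. Even at this tiny scale, a few types cover most of the tokens while most types occur only once; on a real corpus the long tail is far more pronounced.

```python
from collections import Counter

# Toy corpus, invented for illustration; a real corpus shows the same
# long-tailed (roughly Zipfian) pattern far more dramatically.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a quick brown fox jumps over the lazy dog",
]

type_freq = Counter(tok for line in corpus for tok in line.split())

# Frequent head of the distribution first, then the long tail of rare types
for word, freq in type_freq.most_common():
    print(f"{freq:2d}  {word}")

singletons = sum(1 for f in type_freq.values() if f == 1)
print(f"{singletons} of {len(type_freq)} types occur exactly once")
```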

Generalization

Machine learning can be viewed as the approximation of functions that map inputs to outputs. We can categorize problems into the following types based on how many vectors appear on the input and output sides (rough shape signatures are sketched after the list).

  1. One → One: Image classification

  2. Many → One: Text classification, Video classification

  3. N → N: Sequence tagging, where input and output have the same length, such as POS tagging and named entity recognition

  4. One → Many: Image captioning

  5. Many → Many: Machine translation, Text summarization, Video captioning
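
One way to see how these types relate is through the shapes of their inputs and outputs. The stub signatures below are only a sketch (the function names and the use of torch.Tensor are assumptions, not from the note); B is the batch size, T and T2 the input and output lengths, D the feature dimension, and C the number of classes or the output vocabulary size.

```python
from torch import Tensor

# Hypothetical stubs; only the shapes in the comments matter.
# B = batch, T = input length, T2 = output length, D = features, C = classes/vocab.

def one_to_one(x: Tensor) -> Tensor: ...    # (B, D)     -> (B, C)      image classification
def many_to_one(x: Tensor) -> Tensor: ...   # (B, T, D)  -> (B, C)      text/video classification
def n_to_n(x: Tensor) -> Tensor: ...        # (B, T, D)  -> (B, T, C)   POS tagging, NER
def one_to_many(x: Tensor) -> Tensor: ...   # (B, D)     -> (B, T2, C)  image captioning
def many_to_many(x: Tensor) -> Tensor: ...  # (B, T, D)  -> (B, T2, C)  translation, summarization
```

Note that type 3 ties the output length to the input length, whereas type 5 leaves T2 free, which is what makes it the most general case.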

Problem types 1 through 4 are special cases of type 5. This implies:

  1. Forward view: While setting up a curriculum for training/educating students, we should follow the order of increasing problem complexity: type 1 → 2 → 3 → 4, and finally type 5.

  2. Backward view: The skills acquired in solving type 5 can be adapted to the other problem types with slight modification, often by constraining the number of vectors on the input side, the output side, or both (see the sketch after this list).
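
To make the backward view concrete, here is a minimal sketch of a general many → many encoder-decoder and of the many → one special case obtained by constraining the output side to a single vector. PyTorch and GRU layers are assumed here; the class names and sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """General many -> many model (type 5): variable-length input, variable-length output."""

    def __init__(self, src_vocab: int, tgt_vocab: int, dim: int = 64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src: (B, T_src), tgt: (B, T_tgt) -> logits: (B, T_tgt, tgt_vocab)
        _, state = self.encoder(self.src_emb(src))            # summarize the input
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)   # condition the output on it
        return self.out(dec_out)


class ManyToOne(nn.Module):
    """Type 2 recovered from the general recipe: keep the encoder,
    drop the decoder, and map the final state to a fixed set of classes."""

    def __init__(self, src_vocab: int, n_classes: int, dim: int = 64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, src: torch.Tensor) -> torch.Tensor:
        _, state = self.encoder(self.src_emb(src))   # state: (1, B, dim)
        return self.out(state[-1])                   # logits: (B, n_classes)


if __name__ == "__main__":
    model = Seq2Seq(src_vocab=100, tgt_vocab=120)
    src = torch.randint(0, 100, (2, 7))   # batch of 2, input length 7
    tgt = torch.randint(0, 120, (2, 5))   # output length 5, independent of the input length
    print(model(src, tgt).shape)          # torch.Size([2, 5, 120])

    clf = ManyToOne(src_vocab=100, n_classes=3)
    print(clf(src).shape)                 # torch.Size([2, 3])
```

The same constraining trick yields the remaining types: feed a single input vector for one → many (captioning), or tie the output length to the input length for N → N tagging.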