
SwitchTransformers

Overview

The SwitchTransformers model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer.

The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLPs are replaced by a Mixture of Experts (MoE). A routing mechanism (top-1 in this case) assigns each token to a single expert, and each expert is a dense MLP. While Switch Transformers have many more weights than their equivalent dense models, the sparsity allows for better scaling: during a forward pass, only a fraction of the weights is used. The routing mechanism lets the model select the relevant weights on the fly, which increases model capacity without increasing the per-token compute.
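To make the top-1 routing step concrete, here is a minimal, hypothetical sketch of how a token is assigned to one expert. This is not the library's implementation (see `SwitchTransformersTop1Router` below for that); the class name `SimpleTop1Router` and its shapes are illustrative only.

```python
import torch
import torch.nn as nn


class SimpleTop1Router(nn.Module):
    """Illustrative top-1 ("switch") router: each token is scored against every
    expert and sent to the single expert with the highest router probability."""

    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size)
        router_logits = self.classifier(hidden_states)   # (batch, seq_len, num_experts)
        router_probs = torch.softmax(router_logits, dim=-1)
        expert_index = router_probs.argmax(dim=-1)       # top-1 expert id per token
        return expert_index, router_probs


router = SimpleTop1Router(hidden_size=512, num_experts=8)
tokens = torch.randn(2, 10, 512)
expert_index, _ = router(tokens)
print(expert_index.shape)  # torch.Size([2, 10]) -- one expert id per token
```

Only the expert selected for each token runs its dense MLP, which is why the computational cost stays roughly constant as the number of experts (and thus the parameter count) grows.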

The abstract from the paper is the following:

In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.

Tips:

  • SwitchTransformers uses the T5Tokenizer, which can be loaded directly from each model's repository.
  • The released weights are pretrained on an English masked language modeling task and should be fine-tuned for downstream use; see the short example after this list.
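The snippet below is a short usage sketch. It assumes the `google/switch-base-8` checkpoint from the Hub; any of the released Switch Transformers checkpoints can be substituted. Because the released weights are trained with T5-style span corruption, the model fills in sentinel tokens such as `<extra_id_0>`.

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# Masked language modeling with T5-style sentinel tokens
input_ids = tokenizer(
    "The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```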

This model was contributed by Younes Belkada and Arthur Zucker. The original code can be found here.

SwitchTransformersConfig

[[autodoc]] SwitchTransformersConfig

SwitchTransformersTop1Router

[[autodoc]] SwitchTransformersTop1Router
    - _compute_router_probabilities
    - forward

SwitchTransformersSparseMLP

[[autodoc]] SwitchTransformersSparseMLP
    - forward

SwitchTransformersModel

[[autodoc]] SwitchTransformersModel
    - forward

SwitchTransformersForConditionalGeneration

[[autodoc]] SwitchTransformersForConditionalGeneration
    - forward

SwitchTransformersEncoderModel

[[autodoc]] SwitchTransformersEncoderModel
    - forward