
# OPT

## Overview

The OPT model was proposed in Open Pre-trained Transformer Language Models by Meta AI. OPT is a series of open-sourced large causal language models which perform similarly to GPT-3.

The abstract from the paper is the following:

Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.

Tips:

- OPT has the same architecture as [BartDecoder].
- Contrary to GPT2, OPT adds the EOS token `</s>` to the beginning of every prompt. Note: Make sure to pass `use_fast=False` when loading OPT's tokenizer with [AutoTokenizer] to get the correct tokenizer (see the sketch after this list).
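For illustration, here is a minimal sketch of these two points, assuming the `facebook/opt-350m` checkpoint as an example; it loads the slow tokenizer with `use_fast=False` and checks that `</s>` is prepended before generating:

```python
from transformers import AutoTokenizer, OPTForCausalLM

# Example checkpoint; any released OPT checkpoint could be used here.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

prompt = "Hello, I am conscious and"
inputs = tokenizer(prompt, return_tensors="pt")

# The tokenizer prepends the EOS token </s> to the start of the prompt.
first_token = tokenizer.convert_ids_to_tokens(inputs.input_ids[0].tolist())[0]
print(first_token)  # '</s>'

# Generate a continuation of the prompt.
generated_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```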

This model was contributed by Arthur Zucker, Younes Belkada, and Patrick Von Platen. The original code can be found here.

## OPTConfig

[[autodoc]] OPTConfig

## OPTModel

[[autodoc]] OPTModel
    - forward

## OPTForCausalLM

[[autodoc]] OPTForCausalLM
    - forward