Commit
* First pass
* Make conversion script work
* Improve conversion script
* Fix bug, conversion script working
* Improve conversion script, implement BEiTFeatureExtractor
* Make conversion script work based on URL
* Improve conversion script
* Add tests, add documentation
* Fix bug in conversion script
* Fix another bug
* Add support for converting masked image modeling model
* Add support for converting masked image modeling
* Fix bug
* Add print statement for debugging
* Fix another bug
* Make conversion script finally work for masked image modeling models
* Move id2label for datasets to JSON files on the hub
* Make sure id's are read in as integers
* Add integration tests
* Make style & quality
* Fix test, add BEiT to README
* Apply suggestions from @sgugger's review
* Apply suggestions from code review
* Make quality
* Replace nielsr by microsoft in tests, add docs
* Rename BEiT to Beit
* Minor fix
* Fix docs of BeitForMaskedImageModeling

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
1 parent 0dd1152 · commit 83e5a10 · Showing 26 changed files with 2,388 additions and 1,170 deletions.
@@ -0,0 +1,97 @@
..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

BEiT
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The BEiT model was proposed in `BEiT: BERT Pre-Training of Image Transformers <https://arxiv.org/abs/2106.08254>`__ by
Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of
Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class
of an image (as done in the `original ViT paper <https://arxiv.org/abs/2010.11929>`__), BEiT models are pre-trained to
predict visual tokens from the codebook of OpenAI's `DALL-E model <https://arxiv.org/abs/2102.12092>`__ given masked
patches.
The abstract from the paper is the following:

*We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation
from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image
modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image
patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into
visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training
objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we
directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder.
Experimental results on image classification and semantic segmentation show that our model achieves competitive results
with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K,
significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains
86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%).*
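The pre-training recipe in the abstract can be sketched in plain Python. This is a toy illustration of the idea only, not the BEiT code: an image is split into a grid of 16x16 patches, a random subset of patches is masked, and the model is trained to predict the visual tokens at the masked positions. The 40% mask ratio here is an illustrative assumption.

```python
import random

# A 224x224 image with 16x16 patches yields (224 // 16) ** 2 = 196 patches.
IMAGE_SIZE, PATCH_SIZE = 224, 16
num_patches = (IMAGE_SIZE // PATCH_SIZE) ** 2


def sample_mask(num_patches, mask_ratio=0.4, seed=0):
    """Randomly mark roughly mask_ratio of the patch positions as masked.

    Returns a list of booleans, one per patch; True means the patch is
    replaced by a mask token and its visual token must be predicted.
    """
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    masked = set(rng.sample(range(num_patches), num_masked))
    return [i in masked for i in range(num_patches)]


mask = sample_mask(num_patches)
print(num_patches, sum(mask))  # → 196 78
```

The pre-training loss is then computed only over the masked positions, analogously to BERT's masked language modeling objective.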
Tips:

- BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They
  outperform both the original model (ViT) as well as Data-efficient Image Transformers (DeiT) when fine-tuned on
  ImageNet-1K and CIFAR-100.
- As the BEiT models expect each image to be of the same size (resolution), one can use
  :class:`~transformers.BeitFeatureExtractor` to resize (or rescale) and normalize images for the model.
- Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
  each checkpoint. For example, :obj:`microsoft/beit-base-patch16-224` refers to a base-sized architecture with patch
  resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the `hub
  <https://huggingface.co/models?search=microsoft/beit>`__.
- The available checkpoints are either (1) pre-trained on `ImageNet-22k <http://www.image-net.org/>`__ (a collection of
  14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on `ImageNet-1k
  <http://www.image-net.org/challenges/LSVRC/2012/>`__ (also referred to as ILSVRC 2012, a collection of 1.3 million
  images and 1,000 classes).
- BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the
  relative position bias among the several self-attention layers. During fine-tuning, each layer's relative position
  bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to
  pre-train a model from scratch, one needs to set either the :obj:`use_relative_position_bias` or the
  :obj:`use_shared_relative_position_bias` attribute of :class:`~transformers.BeitConfig` to :obj:`True` in order to add
  position embeddings.
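The relative position bias mentioned in the last tip can be made concrete with a small sketch. For an ``h x w`` grid of patches, every ordered pair of patches is assigned an index into a table of ``(2h - 1) * (2w - 1)`` learnable bias values, determined only by the pair's relative offset. This is a simplified illustration of the scheme (the checked-in implementation additionally handles the special classification token):

```python
def relative_position_index(h, w):
    # For an h x w grid of patches, map each pair of patches to an index
    # into a table of (2h - 1) * (2w - 1) learnable biases; patch pairs
    # with the same relative offset share the same bias.
    coords = [(i, j) for i in range(h) for j in range(w)]
    index = []
    for (i1, j1) in coords:
        row = []
        for (i2, j2) in coords:
            # Shift offsets into the ranges [0, 2h-2] and [0, 2w-2].
            dy, dx = i1 - i2 + (h - 1), j1 - j2 + (w - 1)
            row.append(dy * (2 * w - 1) + dx)
        index.append(row)
    return index


idx = relative_position_index(3, 3)
print(len(idx), max(max(row) for row in idx))  # → 9 24
```

Because the index depends only on relative offsets, the bias generalizes across positions, which is what allows the shared pre-training bias to initialize each layer's bias at fine-tuning time.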
This model was contributed by `nielsr <https://huggingface.co/nielsr>`__. The original code can be found `here
<https://github.com/microsoft/unilm/tree/master/beit>`__.


BeitConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BeitConfig
    :members:


BeitFeatureExtractor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BeitFeatureExtractor
    :members: __call__


BeitModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BeitModel
    :members: forward


BeitForMaskedImageModeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BeitForMaskedImageModeling
    :members: forward


BeitForImageClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BeitForImageClassification
    :members: forward
@@ -21,6 +21,7 @@
     auto,
     bart,
     barthez,
+    beit,
     bert,
     bert_generation,
     bert_japanese,
@@ -0,0 +1,59 @@
# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all.

# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import TYPE_CHECKING

from ...file_utils import _LazyModule, is_torch_available, is_vision_available


_import_structure = {
    "configuration_beit": ["BEIT_PRETRAINED_CONFIG_ARCHIVE_MAP", "BeitConfig"],
}

if is_vision_available():
    _import_structure["feature_extraction_beit"] = ["BeitFeatureExtractor"]

if is_torch_available():
    _import_structure["modeling_beit"] = [
        "BEIT_PRETRAINED_MODEL_ARCHIVE_LIST",
        "BeitForImageClassification",
        "BeitForMaskedImageModeling",
        "BeitModel",
        "BeitPreTrainedModel",
    ]

if TYPE_CHECKING:
    from .configuration_beit import BEIT_PRETRAINED_CONFIG_ARCHIVE_MAP, BeitConfig

    if is_vision_available():
        from .feature_extraction_beit import BeitFeatureExtractor

    if is_torch_available():
        from .modeling_beit import (
            BEIT_PRETRAINED_MODEL_ARCHIVE_LIST,
            BeitForImageClassification,
            BeitForMaskedImageModeling,
            BeitModel,
            BeitPreTrainedModel,
        )

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
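The `_LazyModule` pattern in the file above defers importing heavy submodules (e.g. the torch-dependent `modeling_beit`) until an attribute is actually accessed. The following is a simplified, self-contained sketch of that idea, not the actual `transformers._LazyModule`; the `LazyModule` class and the toy example using the stdlib `json` module are illustrative assumptions.

```python
import importlib
import types


class LazyModule(types.ModuleType):
    """Simplified sketch of a lazy-import module (not transformers._LazyModule).

    Submodules listed in import_structure are only imported on first
    attribute access, keeping the cost of the top-level import low.
    """

    def __init__(self, name, import_structure):
        super().__init__(name)
        self._import_structure = import_structure
        # Map each exported name back to the module that defines it.
        self._name_to_module = {
            attr: mod for mod, attrs in import_structure.items() for attr in attrs
        }

    def __getattr__(self, name):
        if name not in self._name_to_module:
            raise AttributeError(f"module {self.__name__!r} has no attribute {name!r}")
        module = importlib.import_module(self._name_to_module[name])
        value = getattr(module, name)
        setattr(self, name, value)  # cache so __getattr__ is not hit again
        return value


# Toy example: expose json.dumps lazily; json is imported only on first use.
lazy = LazyModule("toy", {"json": ["dumps"]})
print(lazy.dumps({"a": 1}))  # → {"a": 1}
```

The real implementation additionally handles relative submodule imports and `TYPE_CHECKING`, which is why the `__init__.py` above duplicates the import list in both branches.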