# Swin Transformer V2

## Overview

The Swin Transformer V2 model was proposed in [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.

The abstract from the paper is the following:

*Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.*

Tips:

- One can use the [`AutoFeatureExtractor`] API to prepare images for the model, as shown in the sketch below.
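
A minimal end-to-end sketch of this workflow, for illustration only: the checkpoint name `microsoft/swinv2-tiny-patch4-window8-256` and the sample image URL are assumptions not taken from this page, and any Swin V2 checkpoint from the Hub could be substituted.

```python
# Sketch: prepare an image with AutoFeatureExtractor and classify it with
# Swinv2ForImageClassification. The checkpoint and image URL are assumptions
# chosen for illustration.
import requests
import torch
from PIL import Image

from transformers import AutoFeatureExtractor, Swinv2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

# The feature extractor resizes and normalizes the image, returning PyTorch tensors.
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```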

This model was contributed by [nandwalritik](https://huggingface.co/nandwalritik). The original code can be found [here](https://github.com/microsoft/Swin-Transformer).

## Swinv2Config

[[autodoc]] Swinv2Config

## Swinv2Model

[[autodoc]] Swinv2Model
    - forward

## Swinv2ForMaskedImageModeling

[[autodoc]] Swinv2ForMaskedImageModeling
    - forward

## Swinv2ForImageClassification

[[autodoc]] transformers.Swinv2ForImageClassification
    - forward