
OWL-ViT

Overview

The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text.

The abstract from the paper is the following:

Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.

Usage

OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
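
Because the backbone is a CLIP-style dual encoder, [OwlViTModel] also exposes get_text_features and get_image_features for retrieving the pooled text and image embeddings (see the [OwlViTModel] API reference below). The snippet that follows is a minimal sketch of this; the keyword arguments are assumed to follow the usual CLIP-style processor outputs (input_ids, attention_mask, pixel_values).

>>> import requests
>>> from PIL import Image

>>> from transformers import OwlViTProcessor, OwlViTModel

>>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Encode the text queries and the image, then pull out the pooled embeddings
>>> inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")
>>> text_embeds = model.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
>>> image_embeds = model.get_image_features(pixel_values=inputs["pixel_values"])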

[OwlViTFeatureExtractor] can be used to resize (or rescale) and normalize images for the model and [CLIPTokenizer] is used to encode the text. [OwlViTProcessor] wraps [OwlViTFeatureExtractor] and [CLIPTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [OwlViTProcessor] and [OwlViTForObjectDetection].

>>> import requests
>>> from PIL import Image
>>> import torch

>>> from transformers import OwlViTProcessor, OwlViTForObjectDetection

>>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs["logits"]  # Prediction logits of shape [batch_size, num_patches, num_max_text_queries]
>>> boxes = outputs["pred_boxes"]  # Object box boundaries of shape [batch_size, num_patches, 4]

>>> batch_size = boxes.shape[0]
>>> for i in range(batch_size):  # Loop over sets of images and text queries
...     image_boxes = outputs["pred_boxes"][i]
...     max_logits = torch.max(outputs["logits"][i], dim=-1)  # best-matching text query per predicted box
...     scores = torch.sigmoid(max_logits.values)  # confidence score of the best query for each box
...     labels = max_logits.indices  # index of the best text query for each box
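
To turn these raw predictions into human-readable detections, one option is to keep only boxes whose score exceeds a confidence threshold. The snippet below is a minimal sketch that reuses the scores, labels, and image_boxes tensors from the loop above together with the original text queries; the 0.1 threshold is an arbitrary value chosen for illustration, and the boxes are printed as the model returns them (not rescaled to pixel coordinates).

>>> text_queries = ["a photo of a cat", "a photo of a dog"]
>>> score_threshold = 0.1  # arbitrary confidence threshold, for illustration only
>>> for score, label, box in zip(scores, labels, image_boxes):
...     if score >= score_threshold:
...         print(f"Detected {text_queries[label.item()]} with confidence {score.item():.3f} at box {box.tolist()}")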

This model was contributed by adirik. The original code can be found here.

OwlViTConfig

[[autodoc]] OwlViTConfig - from_text_vision_configs
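
The from_text_vision_configs class method listed above builds a combined OwlViTConfig from separate text and vision configurations. A minimal sketch, assuming the default hyperparameters of the two sub-configs:

>>> from transformers import OwlViTConfig, OwlViTTextConfig, OwlViTVisionConfig

>>> # Combine separately instantiated text and vision configs into one model config
>>> text_config = OwlViTTextConfig()
>>> vision_config = OwlViTVisionConfig()
>>> config = OwlViTConfig.from_text_vision_configs(text_config, vision_config)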

OwlViTTextConfig

[[autodoc]] OwlViTTextConfig

OwlViTVisionConfig

[[autodoc]] OwlViTVisionConfig

OwlViTFeatureExtractor

[[autodoc]] OwlViTFeatureExtractor - __call__

OwlViTProcessor

[[autodoc]] OwlViTProcessor

OwlViTModel

[[autodoc]] OwlViTModel - forward - get_text_features - get_image_features

OwlViTTextModel

[[autodoc]] OwlViTTextModel - forward

OwlViTVisionModel

[[autodoc]] OwlViTVisionModel - forward

OwlViTForObjectDetection

[[autodoc]] OwlViTForObjectDetection - forward