Commit 3dbe753

Merge branch 'master' of github.com:huggingface/datasets into drop-python36

mariosasko committed Jun 13, 2022
2 parents 8a82d72 + 5eac250

Showing 55 changed files with 3,321 additions and 472 deletions.
230 changes: 230 additions & 0 deletions datasets/bigbench/README.md
@@ -0,0 +1,230 @@
---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- other
languages:
- en
licenses:
- apache-2.0
multilinguality:
- multilingual
- monolingual
pretty_name: bigbench
size_categories:
- unknown
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
- text-classification
- text-generation
- zero-shot-classification
- other
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- fact-checking
- acceptability-classification
- intent-classification
- multi-class-classification
- multi-label-classification
- text-scoring
- hate-speech-detection
- language-modeling
---

# Dataset Card for BIG-bench

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage/Repository:** [https://github.com/google/BIG-bench](https://github.com/google/BIG-bench)
- **Paper:** In progress
- **Leaderboard:**
- **Point of Contact:** [bigbench@googlegroups.com](mailto:bigbench@googlegroups.com)


### Dataset Summary

The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities. Tasks included in BIG-bench are summarized by keyword [here](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md), and by task name [here](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/README.md). A paper introducing the benchmark, including evaluation results on large language models, is currently in preparation.

### Supported Tasks and Leaderboards

BIG-bench consists of both JSON and programmatic tasks.
This implementation in HuggingFace `datasets` covers:

- 24 BIG-bench Lite tasks

- 167 BIG-bench JSON tasks (a superset that includes BIG-bench Lite)

To study the remaining programmatic tasks, please see the [BIG-bench GitHub repo](https://github.com/google/BIG-bench).
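
As a minimal sketch of loading a single task with this implementation (assuming the dataset id `bigbench` added by this commit, and the `emoji_movie` task used as the running example later in this card):

```
from datasets import load_dataset

# Each BIG-bench task is exposed as one configuration of the
# "bigbench" dataset; pass the task name as the configuration.
dataset = load_dataset("bigbench", "emoji_movie")
print(dataset)
```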

### Languages

Although predominantly English, BIG-bench contains tasks in over 1000 written languages, as well as some synthetic and programming languages.
See [BIG-bench organized by keywords](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md). Relevant keywords include `multilingual`, `non-english`, `low-resource-language`, `translation`.

For tasks specifically targeting low-resource languages, see the table below:

| Task Name | Languages |
|---|---|
| Conlang Translation Problems | English, German, Finnish, Abma, Apinayé, Inapuri, Ndebele, Palauan |
| Kannada Riddles | Kannada |
| Language Identification | 1000 languages |
| Swahili English Proverbs | Swahili |
| Which Wiki Edit | English, Russian, Spanish, German, French, Turkish, Japanese, Vietnamese, Chinese, Arabic, Norwegian, Tagalog |




## Dataset Structure

### Data Instances

Each dataset contains 5 features. For example, an instance from the `emoji_movie` task is:

```
{
  "idx": 0,
  "inputs": "Q: What movie does this emoji describe? 👦👓⚡️\n choice: harry potter\n. choice: shutter island\n. choice: inglourious basterds\n. choice: die hard\n. choice: moonlight\nA:",
  "targets": ["harry potter"],
  "multiple_choice_targets": ["harry potter", "shutter island", "die hard", "inglourious basterds", "moonlight"],
  "multiple_choice_scores": [1, 0, 0, 0, 0]
}
```
For tasks that do not have multiple choice targets, the lists are empty.
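
An instance like the one above can be retrieved by indexing into a split; this is a sketch under the same assumptions as the loading example earlier in this card:

```
from datasets import load_dataset

dataset = load_dataset("bigbench", "emoji_movie")

# The "default" split contains every sample of the task, so the
# first entry is the emoji_movie instance shown above.
example = dataset["default"][0]
print(example["inputs"])
print(example["multiple_choice_targets"])
```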


### Data Fields

Every example has the following fields:
- `idx`: an `int` feature
- `inputs`: a `string` feature
- `targets`: a sequence of `string` features
- `multiple_choice_targets`: a sequence of `string` features
- `multiple_choice_scores`: a sequence of `int` features
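
As a quick check, the schema reported by `datasets` should mirror this list (a sketch; the exact printed representation of the feature types may differ):

```
from datasets import load_dataset

dataset = load_dataset("bigbench", "emoji_movie")

# Should list idx, inputs, targets, multiple_choice_targets and
# multiple_choice_scores with the types described above.
print(dataset["default"].features)
```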

### Data Splits

Each task has a `default`, `train` and `validation` split.
The `default` split contains all the samples for each task (it is the same as the `all` split used in the `bigbench.bbseqio` implementation).
For standard evaluation on BIG-bench, we recommend the `default` split; the `train` and `validation` splits are intended for users who want to train a model on BIG-bench.
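
For example, individual splits can be requested directly at load time (a sketch under the same assumptions as the examples above):

```
from datasets import load_dataset

# "default" holds all samples and is the recommended split for evaluation.
eval_set = load_dataset("bigbench", "emoji_movie", split="default")

# "train" and "validation" are intended for training a model on BIG-bench.
train_set = load_dataset("bigbench", "emoji_movie", split="train")
val_set = load_dataset("bigbench", "emoji_movie", split="validation")
```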

## Dataset Creation

BIG-bench tasks were collaboratively submitted through GitHub pull requests.

Each task went through a review and meta-review process with criteria outlined in the [BIG-bench repository documentation](https://github.com/google/BIG-bench/blob/main/docs/doc.md#submission-review-process).
Each task was required to describe the data source and curation methods on the task README page.

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]


### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]


### Personal and Sensitive Information

[More Information Needed]


## Considerations for Using the Data

BIG-bench contains a wide range of tasks, some of which are sensitive and should be used with care.

Some tasks are specifically designed to test biases and failures common to large language models, and so may elicit inappropriate or harmful responses.
For a more thorough discussion, see the BIG-bench paper (in preparation).

To view tasks designed to probe pro-social behavior, including alignment; social, racial, gender, religious, or political bias; toxicity; inclusion; and other issues, please see tasks under the [pro-social behavior keywords](https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md#pro-social-behavior) in the BIG-bench repository.


### Social Impact of Dataset

[More Information Needed]


### Discussion of Biases

[More Information Needed]


### Other Known Limitations

[More Information Needed]


## Additional Information

For a more thorough discussion of all aspects of BIG-bench, including dataset creation and evaluations, see the BIG-bench repository at [https://github.com/google/BIG-bench](https://github.com/google/BIG-bench) and the accompanying paper (in preparation).

### Dataset Curators

[More Information Needed]


### Licensing Information

[Apache License 2.0](https://github.com/google/BIG-bench/blob/main/LICENSE)

### Citation Information

To be added soon!

### Contributions
For a full list of contributors to the BIG-bench dataset, see the paper.

Thanks to [@andersjohanandreassen](https://github.com/andersjohanandreassen) and [@ethansdyer](https://github.com/ethansdyer) for adding this dataset to HuggingFace.

1 comment on commit 3dbe753

@github-actions

PyArrow==6.0.0


Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.007520 / 0.011353 (-0.003833) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.003730 / 0.011008 (-0.007279) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.026374 / 0.038508 (-0.012134) |
| read_batch_unformated after write_array2d | 0.032434 / 0.023109 (0.009324) |
| read_batch_unformated after write_flattened_sequence | 0.270436 / 0.275898 (-0.005462) |
| read_batch_unformated after write_nested_sequence | 0.296964 / 0.323480 (-0.026516) |
| read_col_formatted_as_numpy after write_array2d | 0.005646 / 0.007986 (-0.002339) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.003237 / 0.004328 (-0.001092) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.006298 / 0.004250 (0.002048) |
| read_col_unformated after write_array2d | 0.043723 / 0.037052 (0.006671) |
| read_col_unformated after write_flattened_sequence | 0.263154 / 0.258489 (0.004665) |
| read_col_unformated after write_nested_sequence | 0.298251 / 0.293841 (0.004410) |
| read_formatted_as_numpy after write_array2d | 0.028264 / 0.128546 (-0.100282) |
| read_formatted_as_numpy after write_flattened_sequence | 0.008553 / 0.075646 (-0.067094) |
| read_formatted_as_numpy after write_nested_sequence | 0.226997 / 0.419271 (-0.192275) |
| read_unformated after write_array2d | 0.047024 / 0.043533 (0.003491) |
| read_unformated after write_flattened_sequence | 0.266767 / 0.255139 (0.011628) |
| read_unformated after write_nested_sequence | 0.293728 / 0.283200 (0.010528) |
| write_array2d | 0.096354 / 0.141683 (-0.045328) |
| write_flattened_sequence | 1.297733 / 1.452155 (-0.154422) |
| write_nested_sequence | 1.364847 / 1.492716 (-0.127870) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.221531 / 0.018006 (0.203525) |
| get_batch_of_1024_rows | 0.443024 / 0.000490 (0.442534) |
| get_first_row | 0.011719 / 0.000200 (0.011519) |
| get_last_row | 0.000280 / 0.000054 (0.000225) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.023343 / 0.037411 (-0.014068) |
| shard | 0.090691 / 0.014526 (0.076165) |
| shuffle | 0.103733 / 0.176557 (-0.072824) |
| sort | 0.144769 / 0.737135 (-0.592367) |
| train_test_split | 0.107739 / 0.296338 (-0.188600) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.351727 / 0.215209 (0.136518) |
| read 50000 | 3.517526 / 2.077655 (1.439871) |
| read_batch 50000 10 | 1.589656 / 1.504120 (0.085536) |
| read_batch 50000 100 | 1.423664 / 1.541195 (-0.117530) |
| read_batch 50000 1000 | 1.492518 / 1.468490 (0.024028) |
| read_formatted numpy 5000 | 0.374578 / 4.584777 (-4.210199) |
| read_formatted pandas 5000 | 4.512013 / 3.745712 (0.766300) |
| read_formatted tensorflow 5000 | 0.914522 / 5.269862 (-4.355339) |
| read_formatted torch 5000 | 0.923210 / 4.565676 (-3.642467) |
| read_formatted_batch numpy 5000 10 | 0.051672 / 0.424275 (-0.372603) |
| read_formatted_batch numpy 5000 1000 | 0.010644 / 0.007607 (0.003036) |
| shuffled read 5000 | 0.500357 / 0.226044 (0.274312) |
| shuffled read 50000 | 4.989735 / 2.268929 (2.720806) |
| shuffled read_batch 50000 10 | 2.260122 / 55.444624 (-53.184502) |
| shuffled read_batch 50000 100 | 1.905501 / 6.876477 (-4.970975) |
| shuffled read_batch 50000 1000 | 2.051547 / 2.142072 (-0.090525) |
| shuffled read_formatted numpy 5000 | 0.533442 / 4.805227 (-4.271785) |
| shuffled read_formatted_batch numpy 5000 10 | 0.117576 / 6.500664 (-6.383088) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.059897 / 0.075469 (-0.015572) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.295259 / 1.841788 (-0.546529) |
| map fast-tokenizer batched | 13.037869 / 8.074308 (4.963561) |
| map identity | 24.100748 / 10.191392 (13.909356) |
| map identity batched | 0.767792 / 0.680424 (0.087368) |
| map no-op batched | 0.532354 / 0.534201 (-0.001847) |
| map no-op batched numpy | 0.384649 / 0.579283 (-0.194634) |
| map no-op batched pandas | 0.417095 / 0.434364 (-0.017269) |
| map no-op batched pytorch | 0.275835 / 0.540337 (-0.264502) |
| map no-op batched tensorflow | 0.284879 / 1.386936 (-1.102057) |
PyArrow==latest

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.006040 / 0.011353 (-0.005313) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.004096 / 0.011008 (-0.006912) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.027451 / 0.038508 (-0.011057) |
| read_batch_unformated after write_array2d | 0.032660 / 0.023109 (0.009550) |
| read_batch_unformated after write_flattened_sequence | 0.318764 / 0.275898 (0.042866) |
| read_batch_unformated after write_nested_sequence | 0.353960 / 0.323480 (0.030480) |
| read_col_formatted_as_numpy after write_array2d | 0.003860 / 0.007986 (-0.004126) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.003391 / 0.004328 (-0.000938) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.004802 / 0.004250 (0.000551) |
| read_col_unformated after write_array2d | 0.037658 / 0.037052 (0.000606) |
| read_col_unformated after write_flattened_sequence | 0.314221 / 0.258489 (0.055732) |
| read_col_unformated after write_nested_sequence | 0.364237 / 0.293841 (0.070396) |
| read_formatted_as_numpy after write_array2d | 0.029625 / 0.128546 (-0.098922) |
| read_formatted_as_numpy after write_flattened_sequence | 0.009525 / 0.075646 (-0.066121) |
| read_formatted_as_numpy after write_nested_sequence | 0.253785 / 0.419271 (-0.165486) |
| read_unformated after write_array2d | 0.053264 / 0.043533 (0.009731) |
| read_unformated after write_flattened_sequence | 0.312280 / 0.255139 (0.057141) |
| read_unformated after write_nested_sequence | 0.344184 / 0.283200 (0.060985) |
| write_array2d | 0.098049 / 0.141683 (-0.043634) |
| write_flattened_sequence | 1.453468 / 1.452155 (0.001314) |
| write_nested_sequence | 1.518023 / 1.492716 (0.025307) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.232264 / 0.018006 (0.214257) |
| get_batch_of_1024_rows | 0.447031 / 0.000490 (0.446542) |
| get_first_row | 0.031405 / 0.000200 (0.031205) |
| get_last_row | 0.000261 / 0.000054 (0.000206) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.024791 / 0.037411 (-0.012621) |
| shard | 0.100238 / 0.014526 (0.085712) |
| shuffle | 0.120155 / 0.176557 (-0.056401) |
| sort | 0.160741 / 0.737135 (-0.576394) |
| train_test_split | 0.120755 / 0.296338 (-0.175584) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.369250 / 0.215209 (0.154041) |
| read 50000 | 3.679075 / 2.077655 (1.601420) |
| read_batch 50000 10 | 1.713804 / 1.504120 (0.209684) |
| read_batch 50000 100 | 1.581751 / 1.541195 (0.040556) |
| read_batch 50000 1000 | 1.586166 / 1.468490 (0.117676) |
| read_formatted numpy 5000 | 0.382590 / 4.584777 (-4.202187) |
| read_formatted pandas 5000 | 3.649153 / 3.745712 (-0.096559) |
| read_formatted tensorflow 5000 | 0.847268 / 5.269862 (-4.422593) |
| read_formatted torch 5000 | 0.855986 / 4.565676 (-3.709691) |
| read_formatted_batch numpy 5000 10 | 0.047710 / 0.424275 (-0.376565) |
| read_formatted_batch numpy 5000 1000 | 0.010029 / 0.007607 (0.002422) |
| shuffled read 5000 | 0.453142 / 0.226044 (0.227098) |
| shuffled read 50000 | 4.515006 / 2.268929 (2.246077) |
| shuffled read_batch 50000 10 | 2.150463 / 55.444624 (-53.294162) |
| shuffled read_batch 50000 100 | 1.834089 / 6.876477 (-5.042387) |
| shuffled read_batch 50000 1000 | 1.977727 / 2.142072 (-0.164345) |
| shuffled read_formatted numpy 5000 | 0.476130 / 4.805227 (-4.329097) |
| shuffled read_formatted_batch numpy 5000 10 | 0.108826 / 6.500664 (-6.391838) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.056159 / 0.075469 (-0.019310) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.303849 / 1.841788 (-0.537938) |
| map fast-tokenizer batched | 12.567305 / 8.074308 (4.492997) |
| map identity | 25.182322 / 10.191392 (14.990930) |
| map identity batched | 0.878541 / 0.680424 (0.198117) |
| map no-op batched | 0.550583 / 0.534201 (0.016382) |
| map no-op batched numpy | 0.346693 / 0.579283 (-0.232590) |
| map no-op batched pandas | 0.404375 / 0.434364 (-0.029989) |
| map no-op batched pytorch | 0.286373 / 0.540337 (-0.253964) |
| map no-op batched tensorflow | 0.259174 / 1.386936 (-1.127762) |
