Docs for creating an audio dataset (#4872)
* 📝 add docs for creating audio dataset

* 🖍 small edits, encourage TAR archives more

* 🖍 apply Polina's feedback

* audiofolder and metadata first

* oops, metadata first also in audio load

* replace vivos with librivox indonesia, describe streaming in more detail

* taking over the PR

* check if I can push to other's fork, don't look at this

* bring back vivos as main example, simplify instructions, add librivox-indonesia as an advanced example

* Apply some suggestions from code review

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update docs/source/audio_dataset_repo.mdx

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* fix something (I don't remember what), integrate changes from #4925

* integrate #4952 into image docs too

* rename audio and image datasets guides consistently (to audio/image_dataset.mdx)

* remove outdated doc

* fix audio guide name

* fix link + minor changes

Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Co-authored-by: polinaeterna <polina@huggingface.co>
4 people committed Sep 21, 2022
1 parent 1a9385d commit 733e499
Showing 6 changed files with 702 additions and 96 deletions.
4 changes: 3 additions & 1 deletion docs/source/_toctree.yml
@@ -50,13 +50,15 @@
     title: Load audio data
   - local: audio_process
     title: Process audio data
+  - local: audio_dataset
+    title: Create an audio dataset
   title: "Audio"
 - sections:
   - local: image_load
     title: Load image data
   - local: image_process
     title: Process image data
-  - local: image_dataset_script
+  - local: image_dataset
     title: Create an image dataset
   - local: image_classification
     title: Image classification
43 changes: 42 additions & 1 deletion docs/source/about_dataset_features.mdx
@@ -56,11 +56,52 @@ See the [flatten](./process#flatten) section to learn how you can extract the ne
The array feature type is useful for creating arrays of various sizes. You can create arrays with two dimensions using [`Array2D`], and even arrays with five dimensions using [`Array5D`].

```py
->>> features = Features({'a': Array2D(shape=(1, 3), dtype='int32'))
+>>> features = Features({'a': Array2D(shape=(1, 3), dtype='int32')})
```

The array type also allows the first dimension of the array to be dynamic. This is useful for handling sequences with variable lengths such as sentences, without having to pad or truncate the input to a uniform shape.

```py
>>> features = Features({'a': Array3D(shape=(None, 5, 2), dtype='int32')})
```
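As a minimal sketch of this (the column name `'a'` follows the example above; the data values are made up), you can store rows whose first dimension differs:

```py
>>> from datasets import Dataset, Features, Array3D

>>> features = Features({'a': Array3D(shape=(None, 5, 2), dtype='int32')})
>>> data = {'a': [
...     [[[0] * 2] * 5] * 3,  # first dimension is 3
...     [[[0] * 2] * 5] * 7,  # first dimension is 7
... ]}
>>> dataset = Dataset.from_dict(data, features=features)
```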

# The Audio type

Audio datasets have a column with type [`Audio`], which contains three important fields:

* `array`: the decoded audio data represented as a 1-dimensional array.
* `path`: the path to the downloaded audio file.
* `sampling_rate`: the sampling rate of the audio data.

When you load an audio dataset and call the audio column, the [`Audio`] feature automatically decodes and resamples the audio file:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
```
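Because the [`Audio`] feature resamples on the fly, you can request a different sampling rate by casting the column. A short sketch (MInDS-14 is stored at 8kHz, as shown above; 16000 is just an example target rate):

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> dataset[0]["audio"]["sampling_rate"]
16000
```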

<Tip warning={true}>

Index into an audio dataset using the row index first and then the `audio` column - `dataset[0]["audio"]` - to avoid decoding and resampling all the audio files in the dataset. Otherwise, every audio file in the dataset is decoded and resampled, which can be slow for a large dataset.

</Tip>
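To make the difference concrete, here is a sketch of the two access patterns described in the tip above:

```py
>>> audio = dataset[0]["audio"]  # decodes and resamples only the first audio file
>>> column = dataset["audio"]    # decodes and resamples every audio file in the dataset
```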

With `decode=False`, the [`Audio`] type simply gives you the path or the bytes of the audio file, without decoding it into an `array`:

```py
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train").cast_column("audio", Audio(decode=False))
>>> dataset[0]
{'audio': {'bytes': None,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'},
'english_transcription': 'I would like to set up a joint account with my partner',
'intent_class': 11,
'lang_id': 4,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'transcription': 'I would like to set up a joint account with my partner'}
```
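When you are ready to use the samples, you can decode the file yourself, for example with the `soundfile` library. A sketch assuming the `path` field points to a local WAV file, as in the output above:

```py
>>> import soundfile as sf

>>> sample = dataset[0]["audio"]
>>> array, sampling_rate = sf.read(sample["path"])  # decode the WAV file on demand
```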

1 comment on commit 733e499

@github-actions

Show benchmarks

Automated benchmark results for this commit (new vs. old timings with diffs), reported for PyArrow==6.0.0 and PyArrow==latest across benchmark_array_xd.json, benchmark_getitem_100B.json, benchmark_indices_mapping.json, benchmark_iterating.json, and benchmark_map_filter.json.
