fix for evaluate 0.2.2
lhoestq committed Oct 12, 2022
1 parent 9ec6cc7 commit dc4c764
Showing 1 changed file with 42 additions and 0 deletions.
42 changes: 42 additions & 0 deletions src/datasets/utils/metadata.py
@@ -107,6 +107,48 @@ def to_yaml_string(self) -> str:
).decode("utf-8")


# DEPRECATED - just here to support old versions of evaluate like 0.2.2
known_task_ids = {
    "image-classification": [],
    "translation": [],
    "image-segmentation": [],
    "fill-mask": [],
    "automatic-speech-recognition": [],
    "token-classification": [],
    "sentence-similarity": [],
    "audio-classification": [],
    "question-answering": [],
    "summarization": [],
    "zero-shot-classification": [],
    "table-to-text": [],
    "feature-extraction": [],
    "other": [],
    "multiple-choice": [],
    "text-classification": [],
    "text-to-image": [],
    "text2text-generation": [],
    "zero-shot-image-classification": [],
    "tabular-classification": [],
    "tabular-regression": [],
    "image-to-image": [],
    "tabular-to-text": [],
    "unconditional-image-generation": [],
    "text-retrieval": [],
    "text-to-speech": [],
    "object-detection": [],
    "audio-to-audio": [],
    "text-generation": [],
    "conversational": [],
    "table-question-answering": [],
    "visual-question-answering": [],
    "image-to-text": [],
    "reinforcement-learning": [],
    "voice-activity-detection": [],
    "time-series-forecasting": [],
    "document-question-answering": [],
}


if __name__ == "__main__":
    from argparse import ArgumentParser

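For context, the sketch below illustrates the backward-compatibility contract this commit restores: older evaluate releases such as 0.2.2 import `known_task_ids` from `datasets.utils.metadata`, so the mapping must remain importable even though it is deprecated. The exact call site inside evaluate is not shown in this commit, so the snippet is only an illustrative check, not part of the change itself.

```python
# Illustrative check only (not part of this commit): confirm the deprecated
# `known_task_ids` mapping is importable again, as old `evaluate` releases
# such as 0.2.2 expect it to be.
from datasets.utils.metadata import known_task_ids

assert isinstance(known_task_ids, dict)
assert "question-answering" in known_task_ids
print(f"{len(known_task_ids)} deprecated task categories exposed for backward compatibility")
```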

3 comments on commit dc4c764

@github-actions


Show benchmarks

PyArrow==6.0.0

Show updated benchmarks!

Benchmark: benchmark_array_xd.json

metric | new / old (diff)
read_batch_formatted_as_numpy after write_array2d | 0.009094 / 0.011353 (-0.002259)
read_batch_formatted_as_numpy after write_flattened_sequence | 0.005283 / 0.011008 (-0.005725)
read_batch_formatted_as_numpy after write_nested_sequence | 0.097949 / 0.038508 (0.059441)
read_batch_unformated after write_array2d | 0.035339 / 0.023109 (0.012230)
read_batch_unformated after write_flattened_sequence | 0.294089 / 0.275898 (0.018191)
read_batch_unformated after write_nested_sequence | 0.355489 / 0.323480 (0.032010)
read_col_formatted_as_numpy after write_array2d | 0.007794 / 0.007986 (-0.000192)
read_col_formatted_as_numpy after write_flattened_sequence | 0.004373 / 0.004328 (0.000044)
read_col_formatted_as_numpy after write_nested_sequence | 0.074851 / 0.004250 (0.070600)
read_col_unformated after write_array2d | 0.044044 / 0.037052 (0.006992)
read_col_unformated after write_flattened_sequence | 0.303503 / 0.258489 (0.045014)
read_col_unformated after write_nested_sequence | 0.341844 / 0.293841 (0.048003)
read_formatted_as_numpy after write_array2d | 0.043095 / 0.128546 (-0.085451)
read_formatted_as_numpy after write_flattened_sequence | 0.015175 / 0.075646 (-0.060471)
read_formatted_as_numpy after write_nested_sequence | 0.337357 / 0.419271 (-0.081915)
read_unformated after write_array2d | 0.053081 / 0.043533 (0.009548)
read_unformated after write_flattened_sequence | 0.293482 / 0.255139 (0.038343)
read_unformated after write_nested_sequence | 0.318817 / 0.283200 (0.035618)
write_array2d | 0.103464 / 0.141683 (-0.038219)
write_flattened_sequence | 1.479053 / 1.452155 (0.026899)
write_nested_sequence | 1.529336 / 1.492716 (0.036620)

Benchmark: benchmark_getitem_100B.json

metric | new / old (diff)
get_batch_of_1024_random_rows | 0.011821 / 0.018006 (-0.006185)
get_batch_of_1024_rows | 0.440286 / 0.000490 (0.439796)
get_first_row | 0.003462 / 0.000200 (0.003262)
get_last_row | 0.000093 / 0.000054 (0.000039)

Benchmark: benchmark_indices_mapping.json

metric | new / old (diff)
select | 0.025435 / 0.037411 (-0.011976)
shard | 0.101610 / 0.014526 (0.087084)
shuffle | 0.115695 / 0.176557 (-0.060862)
sort | 0.166554 / 0.737135 (-0.570582)
train_test_split | 0.118992 / 0.296338 (-0.177347)

Benchmark: benchmark_iterating.json

metric | new / old (diff)
read 5000 | 0.396233 / 0.215209 (0.181024)
read 50000 | 3.949302 / 2.077655 (1.871647)
read_batch 50000 10 | 1.782586 / 1.504120 (0.278466)
read_batch 50000 100 | 1.597268 / 1.541195 (0.056074)
read_batch 50000 1000 | 1.699842 / 1.468490 (0.231352)
read_formatted numpy 5000 | 0.700512 / 4.584777 (-3.884265)
read_formatted pandas 5000 | 3.760550 / 3.745712 (0.014838)
read_formatted tensorflow 5000 | 2.102047 / 5.269862 (-3.167814)
read_formatted torch 5000 | 1.472864 / 4.565676 (-3.092812)
read_formatted_batch numpy 5000 10 | 0.084083 / 0.424275 (-0.340192)
read_formatted_batch numpy 5000 1000 | 0.011947 / 0.007607 (0.004340)
shuffled read 5000 | 0.501288 / 0.226044 (0.275243)
shuffled read 50000 | 5.029593 / 2.268929 (2.760665)
shuffled read_batch 50000 10 | 2.238442 / 55.444624 (-53.206183)
shuffled read_batch 50000 100 | 1.901201 / 6.876477 (-4.975276)
shuffled read_batch 50000 1000 | 2.037608 / 2.142072 (-0.104465)
shuffled read_formatted numpy 5000 | 0.844455 / 4.805227 (-3.960772)
shuffled read_formatted_batch numpy 5000 10 | 0.166025 / 6.500664 (-6.334639)
shuffled read_formatted_batch numpy 5000 1000 | 0.062791 / 0.075469 (-0.012678)

Benchmark: benchmark_map_filter.json

metric | new / old (diff)
filter | 1.443084 / 1.841788 (-0.398704)
map fast-tokenizer batched | 13.532921 / 8.074308 (5.458613)
map identity | 24.956023 / 10.191392 (14.764631)
map identity batched | 0.823113 / 0.680424 (0.142689)
map no-op batched | 0.534140 / 0.534201 (-0.000061)
map no-op batched numpy | 0.436386 / 0.579283 (-0.142897)
map no-op batched pandas | 0.429332 / 0.434364 (-0.005032)
map no-op batched pytorch | 0.273202 / 0.540337 (-0.267135)
map no-op batched tensorflow | 0.281722 / 1.386936 (-1.105214)
PyArrow==latest
Show updated benchmarks!

Benchmark: benchmark_array_xd.json

metric | new / old (diff)
read_batch_formatted_as_numpy after write_array2d | 0.007435 / 0.011353 (-0.003917)
read_batch_formatted_as_numpy after write_flattened_sequence | 0.005340 / 0.011008 (-0.005668)
read_batch_formatted_as_numpy after write_nested_sequence | 0.097026 / 0.038508 (0.058518)
read_batch_unformated after write_array2d | 0.033670 / 0.023109 (0.010561)
read_batch_unformated after write_flattened_sequence | 0.341141 / 0.275898 (0.065243)
read_batch_unformated after write_nested_sequence | 0.368140 / 0.323480 (0.044660)
read_col_formatted_as_numpy after write_array2d | 0.005849 / 0.007986 (-0.002136)
read_col_formatted_as_numpy after write_flattened_sequence | 0.004284 / 0.004328 (-0.000045)
read_col_formatted_as_numpy after write_nested_sequence | 0.072800 / 0.004250 (0.068550)
read_col_unformated after write_array2d | 0.041133 / 0.037052 (0.004081)
read_col_unformated after write_flattened_sequence | 0.360563 / 0.258489 (0.102074)
read_col_unformated after write_nested_sequence | 0.385752 / 0.293841 (0.091911)
read_formatted_as_numpy after write_array2d | 0.038853 / 0.128546 (-0.089693)
read_formatted_as_numpy after write_flattened_sequence | 0.012690 / 0.075646 (-0.062957)
read_formatted_as_numpy after write_nested_sequence | 0.342610 / 0.419271 (-0.076662)
read_unformated after write_array2d | 0.063010 / 0.043533 (0.019477)
read_unformated after write_flattened_sequence | 0.335302 / 0.255139 (0.080163)
read_unformated after write_nested_sequence | 0.357350 / 0.283200 (0.074150)
write_array2d | 0.107324 / 0.141683 (-0.034359)
write_flattened_sequence | 1.481960 / 1.452155 (0.029805)
write_nested_sequence | 1.563313 / 1.492716 (0.070596)

Benchmark: benchmark_getitem_100B.json

metric | new / old (diff)
get_batch_of_1024_random_rows | 0.231812 / 0.018006 (0.213806)
get_batch_of_1024_rows | 0.447309 / 0.000490 (0.446819)
get_first_row | 0.003678 / 0.000200 (0.003478)
get_last_row | 0.000086 / 0.000054 (0.000032)

Benchmark: benchmark_indices_mapping.json

metric | new / old (diff)
select | 0.023222 / 0.037411 (-0.014190)
shard | 0.101826 / 0.014526 (0.087300)
shuffle | 0.115700 / 0.176557 (-0.060857)
sort | 0.153856 / 0.737135 (-0.583279)
train_test_split | 0.119199 / 0.296338 (-0.177139)

Benchmark: benchmark_iterating.json

metric | new / old (diff)
read 5000 | 0.414515 / 0.215209 (0.199306)
read 50000 | 4.127441 / 2.077655 (2.049787)
read_batch 50000 10 | 1.988937 / 1.504120 (0.484817)
read_batch 50000 100 | 1.802039 / 1.541195 (0.260845)
read_batch 50000 1000 | 1.841504 / 1.468490 (0.373013)
read_formatted numpy 5000 | 0.692210 / 4.584777 (-3.892566)
read_formatted pandas 5000 | 3.769163 / 3.745712 (0.023451)
read_formatted tensorflow 5000 | 2.048692 / 5.269862 (-3.221169)
read_formatted torch 5000 | 1.311938 / 4.565676 (-3.253738)
read_formatted_batch numpy 5000 10 | 0.083990 / 0.424275 (-0.340285)
read_formatted_batch numpy 5000 1000 | 0.011784 / 0.007607 (0.004177)
shuffled read 5000 | 0.518709 / 0.226044 (0.292665)
shuffled read 50000 | 5.206502 / 2.268929 (2.937574)
shuffled read_batch 50000 10 | 2.470447 / 55.444624 (-52.974177)
shuffled read_batch 50000 100 | 2.129253 / 6.876477 (-4.747223)
shuffled read_batch 50000 1000 | 2.227283 / 2.142072 (0.085211)
shuffled read_formatted numpy 5000 | 0.835740 / 4.805227 (-3.969487)
shuffled read_formatted_batch numpy 5000 10 | 0.164888 / 6.500664 (-6.335777)
shuffled read_formatted_batch numpy 5000 1000 | 0.061161 / 0.075469 (-0.014308)

Benchmark: benchmark_map_filter.json

metric | new / old (diff)
filter | 1.513463 / 1.841788 (-0.328325)
map fast-tokenizer batched | 13.579311 / 8.074308 (5.505003)
map identity | 12.961088 / 10.191392 (2.769696)
map identity batched | 0.911045 / 0.680424 (0.230621)
map no-op batched | 0.591071 / 0.534201 (0.056870)
map no-op batched numpy | 0.419506 / 0.579283 (-0.159777)
map no-op batched pandas | 0.415443 / 0.434364 (-0.018921)
map no-op batched pytorch | 0.255276 / 0.540337 (-0.285061)
map no-op batched tensorflow | 0.260177 / 1.386936 (-1.126759)


@albertvillanova
Member


@lhoestq I think it would be more aligned with best practices to make this contribution through a PR... 😛

@lhoestq
Member Author

lhoestq commented on dc4c764 Oct 13, 2022 via email

