Support dtype_backend="pandas|pyarrow" configuration #9719
Merged: jrbourbeau merged 11 commits into dask:main from jrbourbeau:pyarrow-use-nullable-dtypes on Dec 16, 2022.
Commits (11):
- 9b69961 Support use_nullable_dtypes="pandas|pyarrow" (jrbourbeau)
- f79594e Cleanup (jrbourbeau)
- 472fbd4 Test fixup (jrbourbeau)
- fb3a25f Skip test is pyarrow dtypes not available (jrbourbeau)
- 2ae283b Docstring (jrbourbeau)
- bf30884 Use config option (jrbourbeau)
- 911f36b Add test that demonstrates increased spark interoperability (jrbourbeau)
- ed95fd5 Merge branch 'main' of https://github.com/dask/dask into pyarrow-use-… (jrbourbeau)
- 0498e5c Skip test when arrow dtypes are available (jrbourbeau)
- e822b30 Merge branch 'main' of https://github.com/dask/dask into pyarrow-use-… (jrbourbeau)
- dd80bb8 Rename to dtype_backend (jrbourbeau)
One additional benefit of adding support for pyarrow dtypes is that we actually end up getting better Spark interoperability. For example, I ran into a user group offline who were using Spark with decimal-type data. When they tried to read the corresponding Spark-written Parquet dataset, Dask would end up converting those columns to `object`. With this PR we can now use `dask.config.set({"dataframe.nullable_backend": "pyarrow"})` to read that data backed by pyarrow's `decimal128` type. Anyway, that's the context around this test.
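To make that concrete, here is a minimal sketch of how reading such a dataset might look. The config key comes from the comment above (a later commit in this PR renames the option to `dtype_backend`), the `use_nullable_dtypes=True` keyword is assumed from the first commit's title, and the path `spark_output/` is a hypothetical placeholder:

```python
import dask
import dask.dataframe as dd

# Opt in to pyarrow-backed extension dtypes (the option discussed in this PR;
# the final commit renames it, so the exact key may differ by Dask version).
with dask.config.set({"dataframe.nullable_backend": "pyarrow"}):
    # With the pyarrow backend, decimal columns written by Spark are read as
    # pyarrow decimal128 dtypes instead of being cast to object.
    df = dd.read_parquet("spark_output/", use_nullable_dtypes=True)

print(df.dtypes)
```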