
Add collective metadata functions to the low level API #2224

Open · wants to merge 6 commits into master
Conversation


@jchelly commented Jan 30, 2023

I'd like to add the following functions to the low-level API:

  • H5Pset_all_coll_metadata_ops()
  • H5Pget_all_coll_metadata_ops()
  • H5Pset_coll_metadata_write()
  • H5Pget_coll_metadata_write()

For context: I'm using h5py on an HPC cluster to process simulation outputs stored as HDF5. The code is distributed over multiple compute nodes with 128 cores each, and to make use of all of the CPU cores I run Python under mpi4py with one process per core. I'm using collective I/O to read the input simulation data and write out the results.
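A minimal sketch of that setup using h5py's existing high-level API, assuming h5py is built against a parallel (MPI) HDF5 library; the file name, dataset path, and decomposition are illustrative:

```python
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD

# Open the file with the MPI-IO driver so all ranks share one parallel file handle.
with h5py.File("snapshot.hdf5", "r", driver="mpio", comm=comm) as f:
    dset = f["PartType0/Coordinates"]

    # Split the first dimension of the dataset evenly across MPI ranks.
    n = dset.shape[0]
    start = (comm.rank * n) // comm.size
    stop = ((comm.rank + 1) * n) // comm.size

    # The collective context manager makes the data transfer use
    # collective MPI-IO rather than independent reads.
    with dset.collective:
        local_coords = dset[start:stop]
```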

This puts quite a load on the Lustre parallel file system, probably because every process accesses the files independently for metadata operations. I'm hoping that can be alleviated by having HDF5 do all file access in collective mode, so that only a few processes per node need to touch the file system.

For my use case I just need to put the whole file in collective metadata mode. To do that I've added get/set_all_coll_metadata_ops() and get/set_coll_metadata_write() methods to h5p.PropFAID. The HDF5 documentation says that H5Pset_all_coll_metadata_ops() can also be called on group, dataset, datatype, link, or attribute access property lists. Of those, I think h5py only exposes link and dataset access property lists, so I've also added get/set_all_coll_metadata_ops() to h5p.PropLAID and h5p.PropDAID.
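To illustrate the intended usage, here's a sketch combining the new PropFAID methods from this PR with the existing low-level API (the file name is illustrative, and h5py must be built with MPI support):

```python
from mpi4py import MPI
from h5py import h5f, h5p

# Build a file access property list that selects the MPI-IO driver.
fapl = h5p.create(h5p.FILE_ACCESS)
fapl.set_fapl_mpio(MPI.COMM_WORLD, MPI.Info())

# New in this PR: wrappers for H5Pset_all_coll_metadata_ops() and
# H5Pset_coll_metadata_write(), so that all metadata reads and writes
# on this file are performed collectively.
fapl.set_all_coll_metadata_ops(True)
fapl.set_coll_metadata_write(True)

# Open the file through the low-level API using this property list.
fid = h5f.open(b"snapshot.hdf5", h5f.ACC_RDWR, fapl=fapl)
```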


codecov bot commented Jan 30, 2023

Codecov Report

Base: 90.01% // Head: 89.30% // Merging this PR would decrease project coverage by 0.72% ⚠️

Coverage data is based on head (c22582f) compared to base (c6262ac).
Patch has no changes to coverable lines.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2224      +/-   ##
==========================================
- Coverage   90.01%   89.30%   -0.72%     
==========================================
  Files          17       17              
  Lines        2394     2394              
==========================================
- Hits         2155     2138      -17     
- Misses        239      256      +17     
Impacted Files          Coverage          Δ
h5py/_hl/filters.py     85.20% <0.00%>    -7.66% ⬇️
h5py/_hl/files.py       87.36% <0.00%>    -0.73% ⬇️


@roblatham00 commented

Just driving by to show my support. These optimizations are critical for any non-trivial level of scaling, and I hope they can be added to h5py soon.
