I created a 1600 × 1600 × 200 np.array and stored it as an HDF5 file. I wish to extract 2D slices from the 3D dataset, and this is what I use to read a slice:
f = h5py.File('3D.h5', 'r')
data = f['3D']
data = data[:, :, 155]
This takes about 1.4 s, which is about the same as the time taken to read the entire 3D array (1.9 s). Is there a way to speed up the process when I only want to read 2D slices?
Here is a summary of the environment I used:
h5py 3.9.0
HDF5 1.12.2
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)]
operating system Windows 10
numpy 1.24.4
cython (built with) 0.29.35
numpy (built against) 1.21.6
HDF5 (built against) 1.12.2
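A sketch of one likely fix, assuming the file was originally written contiguously or with chunks that span many z-indices: chunk the dataset so that each chunk holds one full (x, y) plane, so a `[:, :, k]` read touches only the chunks for that plane. The array below is deliberately smaller than the 1600 × 1600 × 200 one in the report, just to keep the sketch quick to run.

```python
import numpy as np
import h5py

# Small stand-in for the 1600 x 1600 x 200 array from the report.
arr = np.random.rand(160, 160, 20)

# chunks=(160, 160, 1): each chunk is exactly one (x, y) plane,
# matching the data[:, :, k] access pattern.
with h5py.File('3D.h5', 'w') as f:
    f.create_dataset('3D', data=arr, chunks=(160, 160, 1))

# Reading one plane now reads (and decompresses) one chunk,
# not the whole dataset.
with h5py.File('3D.h5', 'r') as f:
    plane = f['3D'][:, :, 15]
```

The chunk shape is a trade-off: `(160, 160, 1)` makes z-slices fast but makes reads along the other axes touch every chunk, so it only helps if `[:, :, k]` is the dominant access pattern.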
You do not mention how you created the HDF5 file. Is the dataset chunked and (if so) how?
vasole is on the right track. Assuming you compressed the data, the shape of the compressed chunks will largely determine the speed of reading different slices.
My guess is that reading the slice in the OP is, for some reason, decompressing about as much data as reading the entire dataset, so you save very little time.
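A quick way to check this, sketched as a small helper; `describe_dataset` is a hypothetical name, and `'3D.h5'` / `'3D'` are the file and dataset names from the original post:

```python
import h5py

def describe_dataset(path, name):
    """Return (chunks, compression) for an HDF5 dataset.

    chunks is None for contiguous (unchunked) storage; otherwise it
    is the chunk shape, e.g. (100, 100, 200). compression is e.g.
    'gzip', or None if no filter was applied.
    """
    with h5py.File(path, 'r') as f:
        dset = f[name]
        return dset.chunks, dset.compression
```

For example, `describe_dataset('3D.h5', '3D')` would reveal whether the chunks span the whole z-axis, in which case every `[:, :, k]` read has to decompress essentially the full dataset.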