
Upcoming major improvements #209

Open
4 of 11 tasks
martindurant opened this issue Aug 8, 2022 · 19 comments

martindurant (Member) commented Aug 8, 2022

Stuff that would be cool to get done and is well within our capacity. Please vote if you have favourites here!

nabobalis commented:

I would vote for "coords generation" especially with FITS.

rsignell-usgs (Collaborator) commented Aug 18, 2022

  1. "coords generation" for GeoTIFF
  2. handling netcdf files with different scale_factor and add_offset (not on original list, but...)
  3. parquet

emfdavid (Contributor) commented Sep 9, 2022

automagically run combine in dask tree reduction mode for me please!

joshmoore commented:

"parquet storage (from preffs)" this sounds nifty, but I'll add for what it's worth, I did discuss with @jakirkham what it would look like to use zarr for the storage of kerchunk itself 😄

martindurant (Member, Author) commented Nov 10, 2022

I would certainly love to hear your ideas, and the thought had occurred to me.

In favour of parquet:

  • the data is essentially tabular, no benefit from chunking on higher dimensions
  • most of the data (keys and embedded chunks or metadata) are str/bytes. OTOH, the keys (required for every entry) could be fixed-string (see the sketch below)
  • partitioning into unequal-sized pieces, allowing, for instance, all of the references of one variable to live together and only be loaded at need; the column min/max values in the metadata also help with this
  • in preffs, each key may appear multiple times, to indicate concatenation of subchunks. Parquet could maybe also achieve this with variable-length lists of references. I'm unconvinced that this is a good idea, but zarr doesn't have the capability.

In favour of zarr:

  • it is already a requirement. Parquet also brings in pandas as a requirement.
  • the parquet references would be stored in memory as a dataframe (right?), which has significantly slower indexing compared to dicts from JSON. Raw numpy arrays from zarr might be more efficient. It is worth noting that keys ought to be ASCII.
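For illustration only, here is a minimal sketch of what the tabular layout discussed above could look like using pandas/pyarrow. The column names, example references, and row-group idea are assumptions, not the format kerchunk actually adopted.

```python
# Hypothetical sketch: kerchunk-style references laid out as a parquet table.
import pandas as pd

refs = {
    "temp/.zarray": '{"chunks": [100, 100], "dtype": "<f4"}',   # inline metadata
    "temp/0.0": ["s3://bucket/file1.nc", 4096, 40000],          # [path, offset, size]
    "temp/0.1": ["s3://bucket/file2.nc", 4096, 40000],
}

rows = []
for key, val in refs.items():
    if isinstance(val, (str, bytes)):
        # embedded data: keep it in a "raw" column, no remote path needed
        rows.append({"key": key, "path": None, "offset": 0, "size": 0, "raw": val})
    else:
        path, offset, size = val
        rows.append({"key": key, "path": path, "offset": offset, "size": size, "raw": None})

df = pd.DataFrame(rows)
# Writing one row group per variable would let a reader pull in only the
# references it needs; parquet's column min/max statistics help it skip the rest.
df.to_parquet("references.parquet", engine="pyarrow", index=False)
```

Fixed-width keys and per-variable partitioning, as noted in the bullets above, map naturally onto parquet's row groups and column statistics.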

emfdavid (Contributor) commented:

Moving to parquet or zarr sounds like a great idea.
I am having some success with HRRR. I will try to share results next week.

rsignell-usgs (Collaborator) commented Dec 14, 2022

@emfdavid can you give us an update here? I'm hitting memory issues trying to generate/use kerchunk on the NWM 1km gridded CONUS dataset from https://registry.opendata.aws/nwm-archive/. Creating/loading the consolidated JSON for just 10 years of this 40-year dataset takes 16GB of RAM.
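For context, this is roughly the access pattern being described: loading the combined JSON pulls the whole reference dict into memory via fsspec's ReferenceFileSystem, which is where the RAM cost comes from. The reference-file path below is a placeholder, not a real artifact from this thread.

```python
# Sketch of opening a combined kerchunk JSON with xarray; the "fo" path is a placeholder.
import xarray as xr

ds = xr.open_dataset(
    "reference://",
    engine="zarr",
    backend_kwargs={
        "consolidated": False,
        "storage_options": {
            "fo": "s3://example-bucket/nwm_10yr_combined.json",  # placeholder
            "remote_protocol": "s3",
            "remote_options": {"anon": True},
        },
    },
)
```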

martindurant (Member, Author) commented:

@rsignell-usgs, are you using tree reduction? Since there is a lot of redundancy between the individual files, that should need less peak memory.

rsignell-usgs (Collaborator) commented Dec 14, 2022

@martindurant, yes, there are 100,000+ individual JSONs that cover the 40-year period. I use 40 workers that each consolidate a single year. Access to an individual single-year JSON (which takes 1.5GB of memory) is shown here: https://nbviewer.org/gist/26f42b8556bf4cab5df81fc924342d5d

I don't have enough memory on the ESIP qhub to combine the 40 JSONs into a single JSON. :(

martindurant (Member, Author) commented:

You might still be able to tree further: try combining in batches of 5 or 8, and then combining those?
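A minimal sketch of that batching approach with kerchunk's MultiZarrToZarr follows; the input file names, concat_dims, and identical_dims are assumptions for illustration.

```python
# Two-level ("tree") combine: merge yearly reference files in small batches,
# then merge the batch results. Names and dimensions here are illustrative.
from kerchunk.combine import MultiZarrToZarr

yearly_jsons = [f"nwm_{year}.json" for year in range(1979, 2019)]  # hypothetical inputs

def combine(ref_sets):
    # ref_sets may be paths to JSON files or already-combined reference dicts
    mzz = MultiZarrToZarr(ref_sets, concat_dims=["time"], identical_dims=["x", "y"])
    return mzz.translate()

batch = 5
partials = [combine(yearly_jsons[i:i + batch]) for i in range(0, len(yearly_jsons), batch)]
combined = combine(partials)  # final pass over the much smaller set of partial results
```

Each first-pass call only ever holds a handful of reference sets in memory, which is the point of the batching suggestion above.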

martindurant (Member, Author) commented:

After creating the filesystem for one year, I see 1.2GB in use. I'll look into it.

I am indeed working on the parquet backend, which should give a better memory footprint per reference set; but strings are still strings, so all those paths add up once in memory unless templates are only applied at access time (see the sketch below). Hm.

However, it may be possible, instead, to make the combine process not need to load all the reference sets up front.
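For reference, this is roughly how a version-1 reference set lets path templates be expanded only at access time: the shared URL prefix is stored once and substituted by fsspec's ReferenceFileSystem when a chunk is actually read. All values below are made up.

```python
# Version-1-style reference set, shown as a Python dict; paths and keys are invented.
refs_v1 = {
    "version": 1,
    "templates": {"u": "s3://example-bucket/hrrr/2022"},   # shared prefix, stored once
    "refs": {
        "temp/.zarray": '{"chunks": [100, 100], "dtype": "<f4"}',
        "temp/0.0": ["{{u}}/file_000.grib2", 0, 50000],    # [templated path, offset, size]
        "temp/0.1": ["{{u}}/file_001.grib2", 0, 50000],
    },
}
```

Kept in templated form, each chunk only pays for its short suffix; expanding every path eagerly is what makes the strings add up in memory.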

emfdavid (Contributor) commented Dec 14, 2022 via email

rsignell-usgs (Collaborator) commented:

Thanks for the update @emfdavid. Are those HRRR JSONs in a publicly accessible bucket (perhaps requester-pays)? Have an example notebook?

rsignell-usgs (Collaborator) commented Dec 15, 2022

@martindurant I was able to create four 10-year combined JSONs from the 40 individual yearly JSON files.

The process to create each of these 10-year files took 16GB of the 32GB of memory on the Xlarge instance at https://jupyer.qhub.esipfed.org.

I was unable to create the 40-year combined file from these four 10-year files, though -- it blew past the 32GB of memory.

martindurant (Member, Author) commented:

Try #272

martindurant (Member, Author) commented:

With the latest commits in #272, I could combine 13 years directly with peak memory around 13GB.

rsignell-usgs (Collaborator) commented:

Just to make sure I've got the right version, I have this. You?

```
09:53 $ conda list kerchunk
# packages in environment at /home/conda/users/envs/pangeo:
#
# Name                    Version                   Build  Channel
kerchunk                  0.0.1+420.gca577c4.dirty          pypi_0    pypi
```

martindurant (Member, Author) commented:

yes

maxrjones (Contributor) commented:

FYI, the Pangeo ML augmentation with support for some of these tasks through the NASA ACCESS 2019 program is on FigShare.
