Memory usage of Dataset #281

Open
mingdongt opened this issue Mar 9, 2017 · 2 comments

Comments

mingdongt commented Mar 9, 2017

We have to build and load the dataset in memory every time, which seems to occupy a lot of memory if the data is really large. Is there any way of improving this?

iurisilvio (Collaborator) commented:

No, for now tablib is not intended for large datasets.

I agree it is an issue. I'd like to support it for a subset of the tablib API, mostly format conversions.

#207 is somewhat related to this issue.


chfw commented Aug 8, 2018

With all due respect to tablib, the streaming feature is still pending. For now, I would suggest giving pyexcel a go. Here is the tutorial on streaming data and the information on the choice of plugins.
