
Import/export leads to browser crashes if the IndexedDB data size is > 300 MB #88

Open
gyanendra2058 opened this issue Oct 1, 2021 · 7 comments

Comments

@gyanendra2058

Also observed that the Dexie delete() API crashes if the size of the data to be deleted is > ~700 MB.
Is the delete API tested against huge data sets? Are there any benchmarks for the delete API?
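For illustration (the table and index names here are placeholders, not our real schema), the failing call is essentially a range delete of this shape:

```js
import Dexie from 'dexie';

// Sketch only – 'records' and 'createdAt' are placeholder names, not the real schema.
const db = new Dexie('bigDb');
db.version(1).stores({
  records: '++id, createdAt' // auto-incremented primary key, indexed timestamp
});

// The kind of call that brings the tab down once the matching rows
// add up to several hundred MB:
async function deleteOlderThan(cutoffMs) {
  await db.records
    .where('createdAt')
    .below(cutoffMs)
    .delete(); // Collection.delete()
}
```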

@gyanendra2058
Author

gyanendra2058 commented Oct 1, 2021

I can pass on the schema information and the same data up to 150 MB upon request.

@dfahlander
Collaborator

dfahlander commented Oct 1, 2021

@gyanendra2058 There have been browser-specific crash issues with bulk deletes before (IE and Safari), but Dexie has workarounds for those. It would be marvelous to get a complete repro of the crashes. We can hopefully work around it by dividing the operations into chunks. In the worst case we might have to split it into several transactions - if so, the API caller must also avoid doing the operations from within transactions. Have you tried the noTransaction option when importing?
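For reference, using that option from dexie-export-import would look roughly like this (a sketch only – the option names are from memory, so please double-check them against the addon's README):

```js
import Dexie from 'dexie';
import { importInto } from 'dexie-export-import';

// Sketch: import a previously exported blob without wrapping the whole
// import in a single big transaction. chunkSizeBytes is an assumption on
// my part – verify both option names against the dexie-export-import docs.
async function importLargeExport(db, blob) {
  await importInto(db, blob, {
    noTransaction: true,          // don't run the import inside one transaction
    chunkSizeBytes: 1024 * 1024   // smaller chunks to reduce peak memory
  });
}
```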

@gyanendra2058
Author

@dfahlander

Here are the details of the problem:

  1. Attached is the source code. You can have a look at how we are utilizing Dexie for IDB transactions:
    https://gist.github.com/gyanendra2058/5fbc0bbbdbfbf651256d835732a701a2

Line 532 (the purgeRecordsOlderThanGivenHrs method) is causing the issue (a Chrome "Aw, Snap!" crash) while deleting the records from IDB. The size of the IDB was 2 GB at that time.
I also verified the issue by manually invoking the delete or clear API using Dexie via the console of the main thread.

  2. The data is imported from the prod system and similar data can't be reproduced locally.
    So I tried to export the data from the prod system's IDB and import it locally so that I could play with the issue on my local setup. Unfortunately, the dexie-export-import tool could not manage to export data beyond 200 MB.
    It too crashed when it tried to export data around 1 GB or larger, so I can't provide you with the whole data set. (The kind of export call I am using is sketched after this list.)

  3. I have attached a link to data which is around 37 MB. See if you can somehow copy-paste similar data into this JSON, grow the file to 1 GB, and try to reproduce the issue.

https://www.dropbox.com/s/azv4wg7clatgspj/dexie-export%202.json?dl=0 (If you are not able to view the file, please provide me your email so that I can add you.)

  4. Can you have a look at the source code and give me any clues?
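For context, the export is invoked roughly like this (illustrative only – the real code is in the gist above; the progress callback is just there to see how far it gets before the tab crashes, and the option and field names are as I understand them from the dexie-export-import README):

```js
import Dexie from 'dexie';
import 'dexie-export-import'; // adds db.export() / db.import() to Dexie instances

// Sketch only: roughly how the export is invoked. The progress logging is
// illustrative; option/field names are taken from my reading of the
// dexie-export-import README, so double-check them there.
async function exportDatabase(db) {
  const blob = await db.export({
    prettyJson: false,
    progressCallback: ({ completedRows, totalRows }) => {
      console.log(`exported ${completedRows} of ${totalRows} rows`);
      return true; // keep going
    }
  });
  return blob; // with ~1 GB of data the tab crashes before this resolves
}
```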

Thank you in advance

@gyanendra2058
Author

@dfahlander Sorry for being a pest, but did you get a chance to look into the code and the exported IDB files?

@dfahlander
Collaborator

> @dfahlander Sorry for being a pest, but did you get a chance to look into the code and the exported IDB files?

Not yet. This is open source work I do for free. If you need support on this, we have paid support information on our dexie.org contact page. That said, I regard the large-data use case as important and hope that I or someone else will have a chance to look into it eventually.

@gyanendra2058
Author

gyanendra2058 commented Oct 6, 2021

@dfahlander I tried the bulkDelete API (passing the primary keys) instead of delete() and it worked like a charm. I was able to delete a table that was 3 GB in size in 1-2 seconds, and a 2.5 GB table within 0.5 seconds. So far so good. I will also try this with bigger data sets of 6-8 GB and share the results with you. With that said, what is the difference between delete() and bulkDelete() that causes such a drastic difference? Also, is it better to split a huge data set and perform bulkDelete() in chunks? For example, if my table has 2000 rows, should I do the bulkDelete operation 4 times with 500 keys each? Thank you
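To make the chunking question concrete, this is the kind of thing I have in mind (sketch only – 'records'/'createdAt' are placeholders for our real table, and 500 is an arbitrary batch size):

```js
import Dexie from 'dexie';

// Sketch of chunked deletion: collect the primary keys first, then delete
// them in batches with Table.bulkDelete(). Names and batch size are placeholders.
async function purgeInChunks(db, cutoffMs, batchSize = 500) {
  const keys = await db.records
    .where('createdAt')
    .below(cutoffMs)
    .primaryKeys(); // keys only – avoids materializing the full rows

  for (let i = 0; i < keys.length; i += batchSize) {
    const batch = keys.slice(i, i + batchSize);
    await db.records.bulkDelete(batch); // each call gets its own auto-transaction
  }
}
```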

@dfahlander
Collaborator

Collection.delete() performs bulkDeletes in chunks internally. The difference might be that bulkDelete does it in a single transaction. Maybe it hits some undocumented limit in Chromium on how many operations can be done in a single transaction.
