
Ronin Archival Snapshot #428

Open
shaswatsaloni opened this issue Mar 28, 2024 · 18 comments

@shaswatsaloni

Hi All,
Can anyone provide me with a link to download an archival snapshot for mainnet starting from block 1?

Thanks,
Saloni

@qui-pham
Contributor

Certainly, please find the link below to download the archival snapshot for the mainnet starting from Block 1:

https://github.com/axieinfinity/ronin-snapshot?tab=readme-ov-file#chaindata-snapshot---archive-node

If you encounter any issues or require further assistance, please do not hesitate to let us know.

@shaswatsaloni
Author

Okay, got it. But do we need to download all the files to get the data from block 0?

@minh-bq
Contributor

minh-bq commented Apr 3, 2024

Yes, you need to download all the files.

@shaswatsaloni
Author

Hi @minh-bq, I installed the prerequisite (zstd), provisioned 8 TB of disk space, and tried to download only the first file.

Command used:
[screenshot of the download command]

But I ran into an error after it had downloaded around 117 GB. Attaching a screenshot of the error here:

[screenshot of the error]

Could you please help me with this?

Thanks,
Saloni.

@minh-bq
Contributor

minh-bq commented Apr 4, 2024

According to the guide in the README, I believe you must download all the files and concatenate them before decompressing.
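For reference, a minimal sketch of the concatenate-then-decompress step, assuming the split files follow the naming used elsewhere in this thread (archive-mainnet-chaindata-20240306.tar.zst-000, -001, and so on; the exact date and part count come from the README):

# Concatenate the parts in order into one archive, then extract it.
# The shell glob sorts the -000, -001, ... suffixes correctly.
cat archive-mainnet-chaindata-20240306.tar.zst-* > chaindata.tar.zst
tar -I zstd -xvf chaindata.tar.zst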

@shaswatsaloni
Author

OK @minh-bq, I tried only one command at a time, which is:

wget -O chaindata.tar.zst "https://ss.roninchain.com/archive-mainnet-chaindata-20240306.tar.zst-000"

But still, after 18% completion, I got this error:

[screenshot of the error]

Could you please help me with this?

Thanks,
Saloni.

@minh-bq
Contributor

minh-bq commented Apr 4, 2024

This might be caused by an unstable network connection between your machine and the server. Normally, wget disconnects or retries only a limited number of times.
You can try adding these flags to your command: --retry-connrefused (retry if the connection is refused), --continue (resume the download from the last byte), and --tries=0 (retry indefinitely; the default is 20 tries). For example:

wget --retry-connrefused --continue --tries=0 -O chaindata.tar.zst "https://ss.roninchain.com/archive-mainnet-chaindata-20240306.tar.zst-000"

Reference: https://linux.die.net/man/1/wget

@shaswatsaloni
Author

Hi @minh-bq, I tried the above command, but it has been stuck at 3% for around 4-5 hours.

Attaching a screenshot:
[screenshot of the stalled download]

And one more quick question: I am using Ubuntu 18.04. Will that be enough, or do we need Ubuntu 20.04 or 22.04?

Thanks,
Saloni.

@shaswatsaloni
Author

Checking on this again. Can anyone help me with this?

@minh-bq
Contributor

minh-bq commented Apr 9, 2024

Hi, can you retry (Ctrl+C, then run the command again)? It should resume from where it left off, not from 0%.

@tudoanm

tudoanm commented Apr 9, 2024

First of all, I think you used the wrong section of the docs for downloading the chaindata snapshot.
[screenshot of the relevant README section]
☝️ Above is the correct part, at the end of the README. The script downloads 13 different files. That also means the -O chaindata.tar.zst in your command is not correct; it should be -O chaindata.tar.zst-00X (where X is the index of each file), because you will need to combine those files later.

Secondly, the script is just a usage example, so don't forget to customize the wget command in the script to add the retries minh-bq mentioned: --retry-connrefused --continue --tries=0. Then, regardless of any network errors (like in your case), you can cancel the download at any time, re-run the command, and it will continue from where it left off.
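For example, here is a rough sketch of such a download loop (the 13-part count and the file names are assumptions taken from this thread; check the README for the exact list):

# Download each split file with retries; -O keeps the part suffix so no
# part overwrites another. Adjust the date and range to match the README.
for i in $(seq -f "%03g" 0 12); do
  wget --retry-connrefused --continue --tries=0 \
    -O "chaindata.tar.zst-$i" \
    "https://ss.roninchain.com/archive-mainnet-chaindata-20240306.tar.zst-$i"
done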
Hope this helps!

@shaswatsaloni
Author

Hi, thanks for this. One more thing:
Do I need to download all the files at once, or can I try with only one file to see if that works for me?

@minh-bq
Contributor

minh-bq commented Apr 9, 2024

Currently, you have to download all the files, concatenate them, and then decompress. We'll try to improve that in the future.
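As a side note (a shell-level workaround, not something from the README): if disk space for the combined archive is a concern, the parts can be streamed straight into tar so the concatenated .tar.zst never has to be written out:

# Pipe the concatenated parts directly into tar/zstd; '-f -' reads the
# archive from stdin. File names follow the ones used earlier in this thread.
cat chaindata.tar.zst-* | tar -I zstd -xvf -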

@tudoanm

tudoanm commented Apr 9, 2024

If I'm not mistaken, you are asking whether to download them all at once or one by one? If so, yes, you can download them one by one, but watch the file names so that a new download does not overwrite an already downloaded part.
However, before decompressing, all of the files must be concatenated together.

@shaswatsaloni
Author

Hi, I was able to download all the tar files and am now working on extracting them. That's why this ticket is still open. Once that is done, I will close it.

Thanks,
Saloni

@shaswatsaloni
Author

Hi @minh-bq, the download of the tar files completed. But when extracting with the command:

tar -I zstd -xvf chaindata.tar.zst

the extracted data comes to around 7 TB. It ran for 24 hours, but in the end it says something like this:

[screenshot of the tar error message]

What does this mean? Should I proceed to run the node on top of this?

Thanks.
Saloni

@minh-bq
Contributor

minh-bq commented Apr 17, 2024

With this error, sadly, your downloaded file might be corrupted. @qui-pham, can you provide the hashes of the split files so @shaswatsaloni can find out which split file is corrupted?
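A sketch of how that check could look once hashes are available (the checksums.sha256 file below is hypothetical and would need to be published by the maintainers; the zstd test works on the already-concatenated archive today):

# Hypothetical: verify each split file against published SHA-256 hashes.
sha256sum -c checksums.sha256

# Or test the integrity of the concatenated archive without extracting it:
zstd -t chaindata.tar.zst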

@qui-pham
Contributor

You can ignore this error and start the archive node. The error is caused by how tar interacts with the split files; we'll have updates in a new version.
If you run into errors when starting the node, please ping us so we can help.
