IPNS/IPFS gateway is down #495

Closed
MysticRyuujin opened this issue Jul 31, 2021 · 28 comments

MysticRyuujin commented Jul 31, 2021

Can't load the IPNS hash k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p from any IPFS gateway at the moment, not even the one at gateway.ipfs.io, which seems weird since I've personally pinned the IPNS name to my own nodes with pub/sub support.
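For anyone wanting to reproduce, a check along these lines (a sketch; the gateway URL is the standard path-gateway form, run against a local go-ipfs node) currently fails or hangs:

# Attempt to fetch the root listing through the public gateway:
curl -sSL "https://gateway.ipfs.io/ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p/" | head

# Attempt to resolve the name with a local node:
ipfs resolve /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p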

DeFiYaco commented Aug 2, 2021

We are investigating this issue.
Thank you for letting us know!

DeFiYaco self-assigned this Aug 2, 2021
FabijanC added this to Backlog in Sourcify via automation Aug 31, 2021
ligi commented Sep 6, 2021

Related: #514 #518

More info on what is problematic: this is what ipfs id returns:

{
	"ID": "12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF",
	"PublicKey": "CAESICZjpXqySMlWlZL5eFda1GJcdn8ouUEnjsFlZVMuhOSA",
	"Addresses": [
		"/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF",
		"/ip4/127.0.0.1/udp/4001/quic/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF",
		"/ip4/178.19.221.38/tcp/2456/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF",
		"/ip4/178.19.221.38/udp/5561/quic/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF",
		"/ip6/::1/tcp/4001/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF",
		"/ip6/::1/udp/4001/quic/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF"
	],
	"AgentVersion": "go-ipfs/0.9.0/",
	"ProtocolVersion": "ipfs/0.1.0",
	"Protocols": [
		"/ipfs/bitswap",
		"/ipfs/bitswap/1.0.0",
		"/ipfs/bitswap/1.1.0",
		"/ipfs/bitswap/1.2.0",
		"/ipfs/id/1.0.0",
		"/ipfs/id/push/1.0.0",
		"/ipfs/lan/kad/1.0.0",
		"/ipfs/ping/1.0.0",
		"/libp2p/autonat/1.0.0",
		"/libp2p/circuit/relay/0.1.0",
		"/p2p/id/delta/1.0.0",
		"/x/"
	]
}

But this fails:

ipfs swarm connect /ip4/178.19.221.38/tcp/2456/p2p/12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF
Error: connect 12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF failure: failed to dial 12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF:
  * [/ip4/178.19.221.38/tcp/2456] dial tcp4 0.0.0.0:4001->178.19.221.38:2456: i/o timeout

This also fails:

ipfs dht findpeer 12D3KooWCQDnipR4ovmxZGcWhJDBJPUZLgewmhb9uz9R3zKBGXNF
Error: routing: not found

These things work on an IPFS node that I run without Docker. Pretty sure this is some Docker problem, and I don't really want to touch the Docker stuff.

wmitsuda commented Sep 6, 2021

Is this the custom Dockerfile you use in your installation?

https://github.com/ethereum/sourcify/blob/master/services/ipfs/Dockerfile.ipfs

It seems it is not exposing any ports outside the container; compare the official ipfs Docker image:

https://github.com/ipfs/go-ipfs/blob/master/Dockerfile#L76

wmitsuda commented Sep 6, 2021

I think the important one is 4001; the other ports you probably don't want to expose for security reasons.

DeFiYaco removed their assignment Sep 7, 2021
kuzdogan commented Sep 7, 2021

EXPOSE itself actually does not do anything; it is more like documentation of which ports need to be published when running the container (docs). The ports are actually published when the container is run via https://github.com/ethereum/sourcify/blob/master/environments/ipfs.yaml
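For illustration, the difference looks roughly like this when running an image by hand (a sketch; sourcify-ipfs is a hypothetical local image tag, the real setup goes through the compose file linked above):

# EXPOSE in the Dockerfile is only metadata; nothing is reachable from the host:
docker run -d --name ipfs-test sourcify-ipfs

# The swarm port only becomes reachable when it is explicitly published at run time:
docker run -d --name ipfs-test -p 4001:4001 -p 4001:4001/udp sourcify-ipfs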

wmitsuda commented:

@kuzdogan BTW, have you considered activating IPNS pubsub?

https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#ipns-pubsub

If I read the config correctly, it is not turned on. I have this option enabled in my client, but it also needs support from the publisher for it to work.

Context: https://www.youtube.com/watch?v=XniIDIXU8RE
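For reference, turning it on amounts to running the daemon with the experimental flags (per the doc linked above; both the publishing node and the resolving node need them):

# Run the daemon with pubsub and IPNS-over-pubsub enabled (experimental):
ipfs daemon --enable-pubsub-experiment --enable-namesys-pubsub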

MysticRyuujin added a commit to MysticRyuujin/sourcify that referenced this issue Sep 17, 2021
I think this will help with the IPNS resolution issues? ethereum#495 as suggested by @wmitsuda
kuzdogan commented:

We believe the issue was with the Docker container's network config: the node did not announce itself with its public IP and was therefore inaccessible. This should be addressed by 528c105 and subsequent commits. We just physically moved our servers, so we couldn't configure the firewall yet, but hopefully the networking issue will be sorted out this week.

Pubsub should further improve resolution, but the node needs to be accessible first. Thanks for the input @wmitsuda @MysticRyuujin
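For context, one way to make a containerized node advertise the host's public address is the Addresses.Announce setting (a sketch using the IP from the ipfs id output above; the actual fix in 528c105 may look different):

# Announce the publicly reachable multiaddrs instead of the container-internal ones,
# then restart the daemon:
ipfs config --json Addresses.Announce '["/ip4/178.19.221.38/tcp/4001", "/ip4/178.19.221.38/udp/4001/quic"]'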

wmitsuda commented:

Good. I did some tests from my location trying to resolve the staging IPNS name. I'm assuming the staging name is /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb, which I got from this page: https://staging.sourcify.dev/

Running time ipfs resolve /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb many times:

/ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe
ipfs resolve   0.19s user 0.03s system 4% cpu 4.669 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 169% cpu 0.040 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.04s user 0.02s system 162% cpu 0.040 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 167% cpu 0.041 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.04s user 0.02s system 164% cpu 0.039 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 173% cpu 0.041 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 170% cpu 0.042 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 163% cpu 0.045 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 168% cpu 0.039 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 161% cpu 0.044 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 165% cpu 0.040 total
/ipfs/Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD
ipfs resolve   0.05s user 0.02s system 162% cpu 0.041 total

It is curious that the first time I try to resolve after a local IPFS node restart, it resolves to QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe very slowly; after that it resolves to Qma4nDT8bAnoQNhkpWdH46i2iGEy2MP67iQr9UMjwfjhqD very quickly (probably it is cached).

I have the IPNS pubsub option turned on in my receiving client, as required for it to work.

Turning off pubsub, it goes back to resolving very slowly on every request:

/ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe
ipfs resolve   0.18s user 0.03s system 0% cpu 1:00.05 total
/ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe
ipfs resolve   0.18s user 0.03s system 0% cpu 31.940 total
/ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe
ipfs resolve   0.17s user 0.03s system 2% cpu 9.971 total
/ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe
ipfs resolve   0.17s user 0.03s system 0% cpu 21.560 total
/ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe
ipfs resolve   0.17s user 0.03s system 0% cpu 27.981 total

However, it always resolves to /ipfs/QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe, which makes me wonder whether the hash I'm getting when pubsub is ON is correct (I mean, the most recent one), or whether the most recent one is QmXeSjAZAaGKfbf6EJh9XceEBPxkQsVsBnGNABbkeAgiDe and it is simply not being propagated through the pubsub mechanism.
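One way to cross-check this (a sketch) would be to force a fresh lookup that ignores the local cache and compare it with the pubsub result:

# Resolve the staging name again, bypassing any cached IPNS record:
ipfs name resolve --nocache /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb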

wmitsuda commented:

However, getting individual files is still extremely slow; I couldn't even get a successful call. I tried it with the resolved hash prepended, so this particular case (getting files) doesn't seem related to the IPNS issue.

If you allow me a suggestion: are you using the default flatfs datastore? It seems to be pretty bad for the Sourcify use case (thousands of small files); have you considered using badgerds?

My experience: when @ligi shared the entire production snapshot with me, I tried to do an ipfs add -r . to my local node. It just hung at about ~20%; I couldn't even add all the files locally. I have no idea how the performance would have been if it had worked.

Then I converted my local installation to the badger datastore and it works like a charm. It took just a few minutes to add the entire repo.

reference: ipfs/kubo#4279
converter: https://github.com/ipfs/ipfs-ds-convert
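For reference, the flow is roughly the following (a sketch following the converter's README; for a brand-new repo the profile can be picked at init time instead):

# Fresh repo: initialize directly with the badger datastore profile
ipfs init --profile badgerds

# Existing repo: switch the datastore spec in the config, then convert in place
ipfs config profile apply badgerds
ipfs-ds-convert convert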

kuzdogan moved this from New Issues 🌱 to In Progress 🏗️ in Sourcify Sep 22, 2021
kuzdogan commented Oct 6, 2021

Updates

It seems the IPNS issue is resolved with the --enable-pubsub-experiment --enable-namesys-pubsub flags. Everyone should be able to access the repository via the IPNS gateway or with their own IPFS node using pubsub.

As examples, here are some CIDs from the folder, i.e. the output of ipfs ls /ipfs/QmSvShYJpssyanZMUthvYRckxDHjj3afdAVPFddq3RuaB3/4 (contracts/full_match/4):

QmetYHJH9miCb6Wm94jVxsGNTEt1BpH2rjmakHLpYniXaF - 0xff4CDd042C85efD31a3d37e2CbEc0fd4F87b88E4/
QmcTKW6hrUQEJPbtFtHimCgFqTpR9ReeLx4DFXsBoFQQnP - 0xff58970e1D1fABf05A905264DDfDFb17D9aA5c92/
QmRRbyNZAtNMbgDbYGirAHXcPHQ8fbYtBgxbQNFQJno65K - 0xff65eDBc401803B3B58a18272CF35c5cCe74C7a5/
QmcckYrRZPGEj7C4C8CGSsnJtS1tzVY3wxuZknUBEP21ei - 0xff68767F8d79cc667c89Cb7b43cFC4327A592009/
QmPEtZUgsoCo1h6LoPJEcW4gcn4QcLRmppGFA8U2kM2tNC - 0xff78cd43e2B7723E39b4EE2b4c192f276aa67810/
QmddcYP384fgEH2NS87xpMYdpdpt3Vf8CTAG46gpQfhBLZ - 0xff8F44774DA825Ca0b43d74Ce8cE5801a5581DAE/
QmPEtZUgsoCo1h6LoPJEcW4gcn4QcLRmppGFA8U2kM2tNC - 0xffDE19e485593acc704A4B725B59086a9d1136F6/
QmRv6jJeqWomxrgSRrwR3jTbet7WDY2iwjKtehLL7NQ2rk - 0xffF98c4251f2d3f824dD021E35575A297033432D/
QmaSnybhttcmTUptwWo6qtg9RVL2AEy2CbMQVY5ZH2DYxd - 0xffFD05866AF1611Aa6D2d4d60BCd482a4Eb82A09/

Also the current multiaddress of the staging node is: /ip4/178.19.221.38/tcp/4002/p2p/12D3KooWLpuYXrbG6XJRGKe6BekdmwdmHYwFt73qKNuApprGyqq5

Diagnosis
Tried many of the CIDs above with both the ipfs.io and Cloudflare gateways. Sometimes one was found in a second, sometimes it took ages. I tried to understand why or to recognize a pattern but couldn't find one.

Regarding the ipfs-check tool @ligi shared: with the staging node's multiaddress and different CIDs, I was nearly always getting the third check failing with "Could not find the multihash in the dht", except with the top-level directory CID shown by the up-to-date IPNS gateway (under Index of: /ipns/...). But that one is not being fetched by the gateways any faster either.

What I did
So, failing to troubleshoot further, I just tried turning on the accelerated DHT client and switching the reprovider strategy to "pinned" only, as @ligi suggested.
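Concretely, those two settings amount to something like this on the node (a sketch; the exact values applied on the server may differ):

# Experimental accelerated DHT client (go-ipfs 0.9+), speeds up providing/lookups
ipfs config --json Experimental.AcceleratedDHTClient true

# Only re-announce pinned content instead of every block in the repo
ipfs config Reprovider.Strategy pinned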

Now it seems all CIDs are passing the ipfs-check tool test and it seems to take less time to retrieve the files 🎉 If others also confirm their problems are resolved, I'd mark this closed. One thing we should maybe keep an eye on is whether IPFS now consumes too many resources with these new settings.

Also, our node has an hourly scheduled ipfs add -r of the whole repository followed by an IPNS update (roughly sketched below). In our case this doesn't seem to take much time, probably because most of the files are already added. If the issue is resolved, do you think switching to badgerds would still further improve performance @wmitsuda? I see it is still experimental and I couldn't be sure about the conversion tool, especially on the production directory.
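(The hourly job is essentially something like the following; the repository path and key name are placeholders, the actual script is not shown in this thread:)

# Re-add the repository and publish the new root under the IPNS key
CID=$(ipfs add -r -Q /path/to/repository)
ipfs name publish --key=main "/ipfs/$CID"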

kuzdogan self-assigned this Oct 6, 2021
wmitsuda commented Oct 6, 2021

@kuzdogan thanks for the feedback!

There is a trick when using IPNS pubsub: according to the docs it has to be enabled in BOTH client and server. So even with it enabled on the Sourcify node, users have to enable it on their local node if they want to take advantage of it.

My local node has it enabled and I'm experiencing good timings with ipfs resolve. If I turn it off, it always takes seconds or forever.

When using public gateways, my experience is like yours, very irregular; my guess is that they probably don't use pubsub, so we get the same ugly performance as a local node without pubsub enabled.

So I think the "resolve" part is solved for now, as long as users use a local node and opt in to pubsub, and yes, it is also marked as experimental :)

Regarding getting files, I tried your command:

time ipfs ls /ipfs/QmSvShYJpssyanZMUthvYRckxDHjj3afdAVPFddq3RuaB3/4
Error: context deadline exceeded
ipfs ls /ipfs/QmSvShYJpssyanZMUthvYRckxDHjj3afdAVPFddq3RuaB3/4  0.18s user 0.05s system 0% cpu 1:00.15 total

Right now it is not able to complete; my guess is that the directory changed and that hash was garbage-collected.

If I do an ls with the IPNS name as a prefix:

time ipfs ls /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb/contracts/full_match
QmUURw87avBF4vqU8KbGNDkGUNtfAn2W1znLeDLqwK8AAD - 1/
QmZJ38Dm6mFk8GSFfScNuWrtZTd8hqxjvYuXdJb2GEAPFf - 100/
QmYpo5jDfTR77ymFEyxkBSbPpWeXbfDv5KxinEEYnf6XCQ - 137/
Qma8trkPyVutSERp7fNaZDuMo8EuhckneHWMphrkfSj8fR - 3/
QmYdVCa3z9R3QozSeHT4JMibNG2q8sC1Ks4PTjNkWmHAua - 4/
...
ipfs ls   0.23s user 0.04s system 4% cpu 5.957 total

it is able to complete.

More specifically, doing an ls on a specific contract:

time ipfs ls /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb/contracts/full_match/4/0xff8F44774DA825Ca0b43d74Ce8cE5801a5581DAE
QmNyKq5epkCDy9H8rBN6yLhdgixB4e6hgNJb4pBxLnsfdV 10224 metadata.json
Qmcehaoh4FYAfdMhEFAj8jBT25T4tGm9bMjue6cETZW5U9 -     sources/
ipfs ls   0.27s user 0.05s system 29% cpu 1.113 total

seems to return quickly (1.1 seconds)

Getting the metadata.json for this contract seems to be very quick also:

time ipfs cat /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb/contracts/full_match/4/0xff8F44774DA825Ca0b43d74Ce8cE5801a5581DAE/metadata.json
<...>
ipfs cat   0.08s user 0.04s system 182% cpu 0.065 total

Note: all those successful examples were run on my local node with IPNS pubsub enabled; if I disable it, I get the same unusable performance:

ipfs cat   0.38s user 0.10s system 1% cpu 27.422 total
ipfs cat   0.22s user 0.04s system 5% cpu 4.573 total
ipfs cat   0.20s user 0.04s system 4% cpu 5.324 total

~27 seconds for the first run, 4-5 seconds after that.

Regarding badgerds, I cannot vouch for it; it just happened to be the only way I found to pin the entire repo on my machine :)

Did you apply all those changes to production already? I tested against the production IPNS; ipfs resolve seems as fast as on the staging site, but getting files is still very slow. Can you confirm?

kuzdogan commented Oct 7, 2021

Yes, the resolution seems fine with pubsub; from what I understood in discussions, it has become the de facto standard, and plain DHT resolution has always been slow.

Currently the production setup has pubsub enabled, but:

  1. The latest accelerated DHT + reprovider strategy changes aren't applied.
  2. It is presumably still behind a NAT, as we haven't done the port forwarding for it yet AFAIK.

So I'd say it is expected that resolution is quick but file retrieval is either really slow or impossible. What I noticed is that, since many files/hashes are shared between staging and prod, retrieval can be quick at times. But when given a path from the prod IPNS it takes too long:

#staging
$ time ipfs ls /ipns/k51qzi5uqu5dkuzo866rys9qexfvbfdwxjc20njcln808mzjrhnorgu5rh30lb/contracts
QmUDuwdPQAHNMhhew6m7UwSxq2sA8bNufxfAsqMZJmiUn4 - full_match/
QmNogr5sx9tGUxU9piEgBkn11bYayMuxQ79e1acvVMMAUv - partial_match/

real	0m0,150s
user	0m0,196s
sys	0m0,041s
#prod
time ipfs ls /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p/contracts
QmWLLQcJSVFLAVjK3EbPEj7sfyC9qa7ppvJ7WJqsE5Agem - full_match/
QmVCmzEZoguQjsHE45jw9Pw98V71AfErKVpddi8aNPWy2V - partial_match/

real	0m47,473s
user	0m0,241s
sys	0m0,065s

Once we apply the changes and fix the NAT, I expect production to behave similarly to staging, i.e. file retrievals happening in reasonable times. I guess then we can mark this as resolved :)

Thank you very much for the very informative input. I will keep you updated here about how it goes.

wmitsuda commented Oct 7, 2021

Looking forward to it!

wmitsuda commented Oct 7, 2021

I guess the real validation for this issue will be whether @MysticRyuujin and I can pin the entire production hash from IPNS on our machines.

kuzdogan commented:

We pushed the changes to production and it seems to be working fine. It also passes the ipfs-check tool test.
[screenshot: ipfs-check result]

Here's the id /ip4/178.19.221.38/tcp/4003/p2p/12D3KooWDs4s7c4yrR7ZGG4ZZdbjZRDHn6XE5JruqVTWXP13LZXh

Looking forward to hearing good news :) 🤞

wmitsuda commented:

I just tried some manual tests and it seems pretty good!

I'm now trying to pin the IPNS name; it seems to be progressing, no hiccups. I'll measure the time (I hope it takes a few hours at most, not days!) and report back.

wmitsuda commented:

~44 min; however, my local repo had objects from a previous backup I pinned manually, which probably reduced the total time.

I'll rename my current repo and do a pin from scratch now and measure the total time again.

time ipfs pin add --progress -r /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p
pinned QmXmcNMVeiHLo1WCAXaEhEf2ZqgFe5yyGaLSgvi2kvsX11 recursively
ipfs pin add --progress -r   1.60s user 0.52s system 0% cpu 44:59.21 total

wmitsuda commented:

Pinning from scratch, badgerds: ~1 hour, ~5GB

$ ipfs init -p badgerds
...
$ time ipfs pin add --progress /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p
pinned QmaoST342uWZFdgaGHk5UQuQGnqj5YkxstkFeeeo4MgwAu recursively
ipfs pin add --progress   2.20s user 0.75s system 0% cpu 1:04:32.60 total
$ du -hd1 .ipfs
  0B	.ipfs/keystore
5.0G	.ipfs/badgerds
5.0G	.ipfs

Later today I'll try to pin using the default flatfs storage. I tried it earlier but stopped after about ~1 hour because it was slowing down my computer a little. I'll try again later today when I get back to my home office, let it run non-stop, and then post the numbers here.

kuzdogan commented:

I'd say pretty good. This means the data availability problems seem to be resolved 🎉. Curious how much difference flatfs vs badgerds will make; thanks a lot for investigating.

Also considering adding an external pin for redundancy; let me know if you have any takes on this: #560

wmitsuda commented:

Results recreating the repo and using the default ipfs configuration (flatfs):

$ ipfs init
$ time ipfs pin add --progress /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p
^Ctched/Processed 93823 nodes
Error: canceled
ipfs pin add --progress   10.84s user 4.85s system 0% cpu 6:06:24.93 total

$ du -hd1 .ipfs
518M	.ipfs/blocks
  0B	.ipfs/keystore
472K	.ipfs/datastore
518M	.ipfs

I gave up yesterday after ~6 hours, and as you can see it downloaded roughly 10% of the data. Also, my computer slowed down noticeably, which made me stop the test.

It would be good if others could replicate the test so we have more data points (@MysticRyuujin ? 😀), but my personal conclusion is that for Sourcify-like data (volume/number of files) the default configuration is not good, and this should be documented somewhere so people willing to contribute pinning can prepare in advance (a possible setup is sketched below).
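If/when this gets documented, the contributor-side setup would presumably boil down to something like this (a sketch combining the settings discussed in this thread):

# Hypothetical setup for someone who wants to help pin the repo:
ipfs init --profile badgerds        # avoid flatfs for this many small files
ipfs daemon --enable-pubsub-experiment --enable-namesys-pubsub &
# (give the daemon a moment to come up, then pin the production name)
ipfs pin add --progress /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p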


After I stopped the test yesterday, I tried to recreate my repo using badgerds; however, I noticed that it started to get stuck at different points each time:

time ipfs pin add --progress /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p
^Ctched/Processed 159 nodes
Error: canceled
ipfs pin add --progress   1.31s user 0.56s system 0% cpu 33:09.58 total

I just tried it again now, hoping that something had simply gone crazy yesterday due to repo creation/recreation and network peering, but I'm still getting stuck.

Not sure if it is something on my side this time or something went wrong on the Sourcify IPFS side; are you able to pin from scratch without getting stuck right now?

wmitsuda commented:

Hmm, is the command below supposed to work?

$ ipfs swarm connect /ip4/178.19.221.38/tcp/4003/p2p/12D3KooWDs4s7c4yrR7ZGG4ZZdbjZRDHn6XE5JruqVTWXP13LZXh
Error: connect 12D3KooWDs4s7c4yrR7ZGG4ZZdbjZRDHn6XE5JruqVTWXP13LZXh failure: failed to dial 12D3KooWDs4s7c4yrR7ZGG4ZZdbjZRDHn6XE5JruqVTWXP13LZXh:
  * [/ip4/178.19.221.38/tcp/4003] dial tcp4 0.0.0.0:4001->178.19.221.38:4003: i/o timeout

wmitsuda commented:

It seems it is back now :)

wmitsuda commented:

Results from my second trial of pinning from scratch + badgerds:

$ time ipfs pin add --progress /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p
pinned QmVn7fcwo4Eai19hRX6dG9jAV8piHyxcrPTuobyZjEKhMW recursively
ipfs pin add --progress   4.47s user 1.85s system 0% cpu 2:22:41.00 total

Under 2:30 hours, still pretty affordable for home users with standard hardware.

wmitsuda commented:

That's strange; this time my repo came out smaller, ~3GB:

$ du -hd1 .ipfs
  0B	.ipfs/keystore
3.0G	.ipfs/badgerds
3.0G	.ipfs

No idea what happened this time.

wmitsuda commented:

I started experimenting with the data and got an error when running ipfs get on the entire root; it seems similar to ipfs/kubo#8293

Examining the raw data, it seems there are some contract files with ".." and "..." inside the filename. Not sure if this is a submitter mistake or a bug in the Sourcify code.

Example: k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p/contracts/full_match/1/0xAb30F9FE6954587BE2098AeeF2FE855c3bF77eFa/sources/Bonny_Finance..sol

Anyway, the ipfs client should handle this properly, so I reported it there.

kuzdogan commented:

I guess we can close this issue. Feel free to reopen if anything comes up, or comment on #560.

Sourcify automation moved this from In Progress 🏗️ to Done 🎉 Oct 20, 2021
wmitsuda commented:

It seems something is down again; I can't resolve the IPNS name:

time ipfs resolve /ipns/k51qzi5uqu5dll0ocge71eudqnrgnogmbr37gsgl12uubsinphjoknl6bbi41p
Error: could not resolve name
ipfs resolve   0.23s user 0.04s system 0% cpu 1:00.05 total

The public gateway at ipfs.io also can't resolve it.

Not sure if it is just the name or if the entire service is down.

wmitsuda commented:

I opened another issue since this one is closed.
