
Releases: ipfs/kubo

Release v0.6.0-rc1 (Pre-release)

26 May 08:12

Tracking issue: #7366.

Release v0.5.1

09 May 03:55

Hot on the heels of 0.5.0 is 0.5.1 with some important but small bug fixes. This release:

  1. Removes the 1 minute timeout for IPNS publishes (fixes #7244).
  2. Backports a DHT fix to reduce CPU usage for canceled requests.
  3. Fixes some timer leaks in the QUIC transport (lucas-clemente/quic-go#2515).

Changelog

  • github.com/ipfs/go-ipfs:
  • github.com/libp2p/go-libp2p-core (v0.5.2 -> v0.5.3):
  • github.com/libp2p/go-libp2p-kad-dht (v0.7.10 -> v0.7.11):
  • github.com/libp2p/go-libp2p-routing-helpers (v0.2.2 -> v0.2.3):
  • github.com/lucas-clemente/quic-go (v0.15.5 -> v0.15.7):
    • reset the PTO when dropping a packet number space
    • move deadlineTimer declaration out of the Read loop
    • stop the deadline timer in Stream.Read and Write
    • fix buffer use after it was released when sending an INVALID_TOKEN error
    • create the session timer at the beginning of the run loop
    • stop the timer when the session's run loop returns

Contributors

Contributor Commits Lines ± Files Changed
Marten Seemann 10 +81/-62 19
Steven Allen 5 +42/-18 10
Adin Schmahmann 1 +2/-8 1
dependabot 2 +6/-2 4

Release v0.5.0

28 Apr 17:10

We're excited to announce go-ipfs 0.5.0! This is by far the largest go-ipfs release with ~2500 commits, 98 contributors, and over 650 PRs across ipfs, libp2p, and multiformats.

Highlights

Content Routing

The primary focus of this release was on improving content routing. That is, advertising and finding content. To that end, this release heavily focuses on improving the DHT.

Improved DHT

The distributed hash table (DHT) is how IPFS nodes keep track of who has what data. The DHT implementation has been almost completely rewritten in this release. Providing, finding content, and resolving IPNS records are now all much faster. However, this update carries some risk given the significant number of changes that have gone into it.

The current DHT suffers from three core issues addressed in this release:

  • Most peers in the DHT cannot be dialed (e.g., due to firewalls and NATs). Much of the time spent on a DHT query is wasted trying to connect to peers that cannot be reached.
  • The DHT query logic doesn't properly terminate when it hits the end of the query and, instead, aggressively keeps on searching.
  • The routing tables are poorly maintained. This can cause search performance to slow down linearly with network size, instead of logarithmically as expected.

Reachability

We have addressed the problem of undialable nodes by having nodes wait to join the DHT as server nodes until they've confirmed that they are reachable from the public internet.

To ensure that nodes which are not publicly reachable (e.g., behind VPNs or on offline LANs) can still coordinate and share data, go-ipfs 0.5 will run two DHTs: one for private networks and one for the public internet. Every node will participate in a LAN DHT and a public WAN DHT. See Dual DHT for more details.

Dual DHT

All IPFS nodes will now run two DHTs: one for the public internet WAN, and one for their local network LAN.

  1. When connected to the public internet, IPFS will use both DHTs for finding peers, content, and IPNS records. Nodes only publish provider and IPNS records to the WAN DHT to avoid flooding the local network.
  2. When not connected to the public internet, nodes publish provider and IPNS records to the LAN DHT.

The WAN DHT includes all peers with at least one public IP address. This release will only consider an IPv6 address public if it is in the public internet range 2000::/3.

This feature should not have any noticeable impact on go-ipfs, performance or otherwise. Everything should continue to work in all the currently supported network configurations: VPNs, disconnected LANs, public internet, etc.
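If you'd like to inspect the two routing tables separately, the CLI can report on each DHT; a quick sketch, assuming the ipfs stats dht command that ships alongside the dual DHT:

> ipfs stats dht wan   # WAN routing table (assumed subcommand)
> ipfs stats dht lan   # LAN routing table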

Query Logic

We've improved the DHT query logic to more closely follow Kademlia. This should significantly speed up:

  • Publishing IPNS & provider records.
  • Resolving IPNS addresses.

Previously, nodes would continue searching until they timed out or ran out of peers before stopping (putting or returning data found). Now, nodes will stop as soon as they find the closest peers.
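You can observe the new termination behavior from the command line with a raw DHT walk, which now returns once the closest peers to the key have been found (the peer ID below is a placeholder):

> ipfs dht query <peer-id>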

Routing Tables

Finally, we've addressed the poorly maintained routing tables by:

  • Reducing the likelihood that the connection manager will kill connections to peers in the routing table.
  • Keeping peers in the routing table, even if we get disconnected from them.
  • Actively and frequently querying the DHT to keep our routing table full.
  • Prioritizing useful peers that respond to queries quickly.

Testing

The DHT rewrite was made possible by Testground, our new testing framework. Testground allows us to spin up multi-thousand node tests with simulated real-world network conditions. By combining Testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.

Provider Record Changes

When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this providing.

However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID), and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) different codecs, depending on how we're interpreting the data.

Prior to go-ipfs 0.5.0, we used the content ID (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the content under a different CID.

In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.

Warning: until the rest of the network upgrades, this change could impact finding content added with CIDv1. Because go-ipfs 0.5.0 will announce and search for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes running earlier releases, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact; however, users are strongly encouraged to upgrade as soon as possible.
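You can check for yourself that a v0 and a v1 CID name the same underlying multihash using the ipfs cid base32 command, which converts a CID to its v1 base32 form; the CID below is the well-known getting-started directory, used purely as an example:

> ipfs cid base32 QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG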

Content Transfer

A secondary focus in this release was improving content transfer, our data exchange protocols.

Refactored Bitswap

This release includes a major Bitswap refactor, running a new and backward compatible Bitswap protocol. We expect these changes to improve performance significantly.

With the refactored Bitswap, we expect:

  • Few to no duplicate blocks when fetching data from other nodes speaking the new protocol.
  • Better parallelism when fetching from multiple peers.

The new Bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement.

Server-Side Graphsync Support (Experimental)

Graphsync is a new exchange protocol that operates at the IPLD Graph layer instead of the Block layer like bitswap.

For example, to download "/ipfs/QmFoo/index.html":

  • Bitswap would download QmFoo, look up "index.html" in the directory named by QmFoo to resolve it to the CID QmIndex, and finally download QmIndex.
  • Graphsync would ask peers for "/ipfs/QmFoo/index.html". Specifically, it would ask for the child named "index.html" of the object named by "QmFoo".

This saves us round-trips in exchange for some extra protocol complexity. Moreover, this protocol allows specifying more powerful queries like "give me everything under QmFoo". This can be used to quickly download a large amount of data with few round-trips.

At the moment, go-ipfs cannot use this protocol to download content from other peers. However, if enabled, go-ipfs can serve content to other peers over this protocol. This may be useful for pinning services that wish to quickly replicate client data.

To enable, run:

> ipfs config --json Experimental.GraphsyncEnabled true

Datastores

Continuing with the theme of improving our core data handling subsystems, both of the datastores used in go-ipfs, Badger and flatfs, have received important updates in this release:

Badger

Badger has been in go-ipfs for over a year as an experimental feature, and we're promoting it to stable (but not default). For this release, we've switched from writing to disk synchronously to explicitly syncing where appropriate, significantly increasing write throughput.

The current and default datastore used by go-ipfs is FlatFS, which essentially stores blocks of data as individual files on your file system. However, there are lots of optimizations a specialized database can make that a standard file system cannot.

The benefit of Badger is that adding and fetching data is significantly faster than with the default datastore, FlatFS. In some tests, adding data to Badger is 32x faster than adding it to FlatFS (in this release).

Enable Badger

In this release, we're marking the Badger datastore as stable. However, we're not yet enabling it by default. You can enable it at initialization by running:

> ipfs init --profile=badgerds
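Note that the profile only applies to newly initialized repos. Converting an existing FlatFS repo in place is a separate step; the ipfs-ds-convert tool (github.com/ipfs/ipfs-ds-convert) is one way to do it, sketched here under the assumption that you've already updated the datastore spec in your config:

> ipfs-ds-convert convert   # assumes the repo config already points at badgerds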

Issues with Badger

While Badger is a great solution, there are some issues you should consider before enabling it.

Badger is complicated. FlatFS pushes all the complexity down into the filesystem itself. That means that FlatFS is only likely to lose your data if your underlying filesystem gets corrupted, while there are more opportunities for Badger itself to get corrupted.

Badger can use a lot of memory. In this release, we've tuned Badger to use ~20MB of memory by default. However, it can still produce memory usage spikes as large as 1GiB when garbage collecting.

Finally, Badger isn't very aggressive when it comes to garbage collection, and we're still investigating ways to get it to more aggressively clean up after itself.

We suggest you use Badger if:

  • Performance is your main requirement.
  • You rarely delete anything.
  • You have some memory to spare.

Flatfs

In the flatfs datastore, we've fixed an issue where temporary files could be left behind in some cases. While this release will avoid leaving behind temporary files, you may want to remove any left behind by previous releases:

> rm ~/.ipfs/blocks/*...

Release v0.5.0-rc4 (Pre-release)

25 Apr 09:21

Since RC3:

  • Reduce duplicate blocks in bitswap by increasing some timeouts and fixing the use of sessions in the ipfs pin command.
  • Fix some bugs in the ipfs dht CLI commands.
  • Ensure bitswap cancels are sent when the request is aborted.
  • Optimize some bitswap hot-spots and reduce allocations.
  • Harden use of the libp2p identify protocol to ensure we never "forget" our peers' protocols. This is important in this release because we're using this information to determine whether or not a peer is a member of the DHT.
  • Fix some edge cases where we might not notice that a peer has transitioned to/from a DHT server/client.
  • Avoid forgetting our observed external addresses when no new connections have formed in the last 10 minutes. This has been a mild issue since 2016 but was exacerbated by this release as we now push address updates to our peers when our addresses change. Unfortunately, combined, this meant we'd tell our peers to forget our external addresses (but only if we haven't formed a single new connection in the last 10 minutes).

Release v0.5.0-rc3 (Pre-release)

22 Apr 07:25

Since RC2:

  • Many typo fixes.
  • Merged some improvements to the gateway directory listing template.
  • Performance tweaks in bitswap and the DHT.
  • More integration tests for the DHT.
  • Fixed redirects to the subdomain gateway for directory listings.
  • Merged some debugging code for QUIC.
  • Updated the WebUI to pull in some bug fixes.
  • Updated flatfs to fix some issues on Windows in the presence of antivirus software.
  • Updated the Go version to 1.13.10.
  • Avoided adding IPv6 peers to the WAN DHT if their only "public" IP addresses aren't in the public internet IPv6 range.

Release v0.5.0-rc2 (Pre-release)

15 Apr 07:09

Release issue: #7109

Changes between RC1 and RC2

Other than bug fixes, the following major changes were made between RC1 and RC2.

QUIC Upgrade

In RC1, we downgraded to a previous version of the (experimental) QUIC transport so we could build on go 1.13. In RC2, our QUIC transport was patched to support go 1.13 so we've upgraded back to the latest version.

NOTE: The latest version implements a different and incompatible draft (draft 27) of the QUIC protocol than the previous RC and go-ipfs 0.4.23. In practice, this shouldn't cause any issues as long as your node supports transports other than QUIC (also necessary to communicate with the vast majority of the network).

DHT "auto" mode

In this RC, the DHT will not enter "server" mode until your node determines that it is reachable from the public internet. This prevents unreachable nodes from polluting the DHT. Please read the "New DHT" section in the issue body for more info.

AutoNAT

IPFS has a protocol called AutoNAT for detecting whether or not a node is "reachable" from the public internet. In short:

  1. An AutoNAT client asks a node running an AutoNAT service if it can be reached at one of a set of guessed addresses.
  2. The AutoNAT service will attempt to "dialback" those addresses (with some restrictions, e.g., we won't dial back to a different IP address).
  3. If the AutoNAT service succeeds, it will report back the address it successfully dialed and the AutoNAT client will now know that it is reachable from the public internet.

In go-ipfs 0.5, all nodes act as AutoNAT clients to determine if they should switch into DHT server mode.

As of this RC, all nodes (except new nodes initialized with the "lowpower" config profile) will also run a rate-limited AutoNAT service by default. This should have minimal overhead but we may change the defaults in RC3 (e.g., rate limit further or only enable the AutoNAT service on DHT servers).

In addition to enabling the AutoNAT service by default, this RC changes the AutoNAT config options around:

  1. It removes the Swarm.EnableAutoNATService option.
  2. It adds an AutoNAT config section (empty by default). This new section is documented in docs/config.md along with the rest of the config file; see the example below.
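As an illustration of the new layout, tweaking the AutoNAT service now means setting keys under the AutoNAT section rather than the old Swarm flag. A hedged sketch (AutoNAT.ServiceMode is our reading of the new section; docs/config.md is authoritative):

> ipfs config AutoNAT.ServiceMode disabled   # assumed key name; see docs/config.md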

LAN/WAN DHT

As forewarned in the RC1 release notes, RC2 includes the split LAN/WAN DHT. All IPFS nodes will now run two DHTs: one for the public internet (WAN) and one for their local network (LAN).

  • When connected to the public internet, IPFS will use both DHTs for finding peers, content, and IPNS records, but will only publish records (provider and IPNS) to the WAN DHT to avoid flooding the local network.
  • When not connected to the public internet, IPFS will publish provider and IPNS records to the LAN DHT.

This feature should not have any noticeable impact, performance or otherwise, and go-ipfs should continue to work in all the currently supported network configurations: VPNs, disconnected LANs, public internet, etc.

In a future release, we hope to use this feature to limit the advertisement of private addresses to the local LAN.

Release v0.5.0-rc1 (Pre-release)

07 Apr 08:20

This is the first RC for go-ipfs 0.5.0. See #7109 for details.

Release v0.4.23

30 Jan 06:55

Would sir/madam care for another patch release while they wait?

Yes that's right, the next feature release of go-ipfs (0.5.0) is, well, running a tiny bit behind schedule. In the meantime though, we have patches, and I'm not talking pirate eye patches, I'm talking bug fixes. We're hunting these bugs like they're Pokémon, and jeez, do we come across some rare and difficult-to-fix ones? You betcha.

Alright, enough funny business, what's the deal? Ok so, I don't want to alarm anyone but this release has some critical fixes and if you're using go-ipfs or know someone who is then you and your friends need to slide into your upgrade pants and give those IPFS nodes a good wipe down ASAP.

If you're a busy person and are feeling like you've read a little too much already, the TL;DR on the critical fixes is:

  1. We fixed a bug in the TLS transport that would (very rarely) cause disconnects during the handshake. You really should upgrade or you'll see this bug more and more when TLS is enabled by default in go-ipfs 0.5.0.
  2. We patched a commonly occurring bug in the websocket transport that was causing panics because of concurrent writes.

🔦 Highlights

🤝 Fixed Spontaneous TLS Disconnects

If this isn't reason enough to upgrade I don't know what is. Turns out, a TLS handshake may have been unintentionally aborted for no good reason 😱. Don't panic just yet! It's a really rare race condition and in go-ipfs 0.4.x the TLS transport is experimental (SECIO is currently the default).

Phew, ok, that said, in go-ipfs 0.5.0, TLS will be the default so don't delay, upgrade today!

😱 Fixed Panics and Crashes

Panicking won't help, in life, and also in golang. Stay calm and breathe slowly. We patched a number of panics and crashes that were uncovered, including a panic due to concurrent writes that you probably saw quite a lot if you were using the websocket transport. High ten 🙌?

🔁 Fixed Recursive Resolving of dnsaddr Multiaddrs

dnsaddrs can be recursive! That means a given dnsaddr can resolve to another dnsaddr. Not indefinitely though, don't try to trick us with your circular addresses - you get 32 goes on the ride maximum.

We found this issue when rolling out a brand spanking new set of bootstrap nodes only to discover their new addresses were, well, what's the opposite of recursive? It's not cursive...non-recursive I guess. Basically they resolved one time and then not again. I know right - bad news bears 🐻!?

Ok, "bear" this in mind: you want to keep all your DNS TXT records below 512 bytes to avoid UDP fragmentation, otherwise you'll get a truncated reply and have to connect with TCP to get all the records. If you have lots of dnsaddr TXT records then it can be more efficient to use recursive resolving than to get a truncated reply and go through the famous 18-way SYN, SYN-ACK ACK, ACK-SYN, ACK-ACK (...etc, etc) TCP handshake, not to mention the fact that go-ipfs will not even try to fall back to TCP 😅.

Anyway, long story short. We fixed recursive dnsaddr resolving so we didn't have to deal with UDP fragmentation. You're welcome.
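If you're curious what a dnsaddr actually looks like, you can query its TXT records directly; the libp2p bootstrap domain is a convenient real-world example (output elided here):

> dig +short TXT _dnsaddr.bootstrap.libp2p.io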

📻 Retuned Connection Manager

The Connection Manager has been tuned to better prioritize existing connections by not counting new connections in the "grace" period (30s) towards connection limits. New connections are like new friends. You can't hang out with everyone all the time, I mean, it just gets difficult to book a restaurant after a while.

You also wouldn't stop being friends with Jane just because you met Sarah once on the train. You and Jane have history, think of everything you've been through. Remember that time when Jane's dog, Dave, ran away? I know, it's a weird name for a dog, I mean who gives a human name to a dog anyway, but I guess that's one of the reasons you like Jane. Anyway, she lost her dog and you both looked all around town for it, you were about to give up but then you heard faint whimpering as you were walking back to the house. Dave had somehow managed to fall into the old abandoned well!

You see?! History! ...and, erh, what was I saying? Oh yeah, Connection Manager - new connections don't cause us to close useful, existing connections (like Jane). More specifically though, this change solves the problem of your peer receiving more inbound connections than the HighWater limit, causing it to disconnect from Jane, as well as all your other good friends (peers not in the grace period) in favor of connections that might not even work out. No-one wants to be friendless, and this fix avoids that awkward situation. Though, it does mean you'll keep more connections in total. Maybe consider reducing the HighWater setting in your config.
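For example, to lower the limit (the value 500 is purely illustrative; pick a number that suits your machine):

> ipfs config --json Swarm.ConnMgr.HighWater 500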

🍖 Reduced Relay Related DHT Spam

When AutoRelay was enabled, and your IPFS node was unreachable behind a NAT or something, go-ipfs would search the DHT for 3 relays with RelayHop enabled, connect to them and then advertise them as relays.

The problem is that many of the public relays had low connection limits and were overloaded. There's a lot of IPFS nodes in the network, and a lot of unreachable nodes trying their best to hop around via relays. So relay nodes were being DDoSed and they were constantly killing connections. Nodes trying to use the relays were on a continuous quest for better ones, which was causing 95% of the DHT traffic. Eek!

So, instead of spamming the DHT the whole time trying to find random, potentially poor relays, IPFS is now using a pre-defined set of autorelays. I mean, try to tell me that doesn't make sense.
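AutoRelay itself remains opt-in. If you want your unreachable node to advertise relay addresses, enable it in the config; a sketch, assuming the Swarm.EnableAutoRelay flag from the 0.4.x config:

> ipfs config --json Swarm.EnableAutoRelay true   # assumed flag name for this era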

🐾 Better Bitswap

Joe has the rare shiny collectable card you've been hunting for forever (since yesterday). You've spotted him, right over there on the other side of the playground. But now that you've found what you're looking for, you're so excited you forget what you were doing and start looking again.

This is exactly what bitswap is like when you have a bug where you stop trying to connect to providers once you've found enough of them. Specifically, if we found enough providers (100) or timed out the provider request, bitswap would cancel any in-progress connection attempts to providers and walk away.

We're also now marking frequently used peers as "important" in the connection manager so those connections do not get dropped. This is like, erm, you and Joe being besties. Joe has all the good cards and is surprisingly willing to part with them. Ok, I'll admit, card trading is probably not a great analogy to bitswap 😛

🦄 And More!

  • Fixed build on go 1.13
  • New version of the WebUI to fix some issues with the peers map

❤️ Contributors

Contributor Commits Lines ± Files Changed
Steven Allen 52 +1866/-578 102
vyzo 12 +167/-90 22
whyrusleeping 5 +136/-52 7
Roman Proskuryakov 7 +94/-7 10
Jakub Sztandera 3 +58/-13 7
hucg 2 +31/-11 2
Raúl Kripalani 2 +7/-33 6
Marten Seemann 3 +27/-10 5
Marcin Rataj 2 +26/-0 5
b5 1 +2/-22 1
Hector Sanjuan 1 +11/-0 1
Yusef Napora 1 +4/-0 1

Would you like to contribute to the IPFS project but don't know how? There are a few places where you can get started.

⁉️ Do you have questions?

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the #ipfs channel on Freenode, which is also accessible through our Matrix bridge.

Release v0.4.22

14 Aug 07:04

We're releasing a PATCH release of go-ipfs based on 0.4.21 containing some critical fixes.

The IPFS network has scaled to the point where small changes can have a wide-reaching impact on the entire network. To keep this situation from escalating, we've put a hold on releasing new features until we can improve our release process (which we've trialed in this release) and testing procedures.

This release includes fixes for the following regressions:

  1. A major bitswap throughput regression introduced in 0.4.21 (ipfs/go-ipfs#6442).
  2. High bitswap CPU usage when connected to many (e.g. 10,000) peers. See ipfs/go-bitswap#154.
  3. The local network discovery service sometimes initializes before the networking module, causing it to announce the wrong addresses and sometimes complain about not being able to determine the IP address (ipfs/go-ipfs#6415).

It also includes fixes for:

  1. Pins not being persisted after ipfs block add --pin (ipfs/go-ipfs#6441).
  2. Panic due to concurrent map access when adding and listing pins at the same time (ipfs/go-ipfs#6419).
  3. Potential pin-set corruption given a concurrent ipfs repo gc and ipfs pin rm (ipfs/go-ipfs#6444).
  4. Build failure due to a deleted git tag in one of our dependencies (ipfs/go-ds-badger#64).

Thanks to:

Release v0.4.22-rc1 (Pre-release)

22 Jul 18:26

Track progress on #6506.