
Releases: ipfs/kubo

v0.15.0-rc1

17 Aug 12:06
Pre-release

See the related issue: #9152

The draft changelog is in TODO.

v0.14.0

21 Jul 03:05
e0fabd6

Kubo v0.14 release

Overview

Below is an outline of what's in this release, to give you a sense of everything included.

🛠 BREAKING CHANGES

Removed mdns_legacy implementation

The modern DNS-SD compatible zeroconf implementation
(based on this specification)
has been running next to the mdns_legacy for a while (since v0.11). During
this transitional period Kubo nodes were sending twice as many LAN packets,
which ends with this release: we've removed the legacy implementation.

🔦 Highlights

🛣️ Delegated Routing

Content routing is the term used to describe the problem of finding providers for a given piece of content.
If you have a hash, or CID of some data, how do you find who has it?
In IPFS, until now, only a DHT was used as a decentralized answer to content routing.
Now, content routing can be handled by clients implementing the Reframe protocol.

Example configuration usage using the Filecoin Network Indexer:

ipfs config Routing.Routers.CidContact --json '{
  "Type": "reframe",
  "Parameters": {
    "Endpoint": "https://cid.contact/reframe"
  }
}'

👥 Rename to Kubo

We've renamed Go-IPFS to Kubo (details).

Published artifacts use kubo now, and are available at:

To minimize the impact on infrastructure that autoupdates on a new release,
the same binaries are still published under the old name at:

The libp2p identify useragent of Kubo has also been changed from go-ipfs to kubo.

🎒 ipfs repo migrate

This new command allows you to run the repo migration without starting the daemon.

See ipfs repo migrate --help for more info.
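A minimal usage sketch (the follow-up version check is just a suggested way to confirm the result, not part of the command itself):

```shell
# Apply any pending repo migrations without starting the daemon.
ipfs repo migrate

# Confirm the repo version afterwards.
ipfs repo version
```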

🚀 Emoji support in Multibase

Kubo now supports base256emoji encoding in all Multibase contexts. Use it for testing Unicode support, as visual aid while explaining Multiformats, or just for fun:

$ echo -n "test" | ipfs multibase encode -b base256emoji -
🚀😈✋🌈😈

$ echo -n "🚀😈✋🌈😈" | ipfs multibase decode -
test

$ ipfs cid format -v 1 -b base256emoji bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi
🚀🪐⭐💻😅❓💎🌈🌸🌚💰💍🌒😵🐶💁🤐🌎👼🙃🙅☺🌚😞🤤⭐🚀😃✈🌕😚🍻💜🐷⚽✌😊

/ipfs/🚀🪐⭐💻😅❓💎🌈🌸🌚💰💍🌒😵🐶💁🤐🌎👼🙃🙅☺🌚😞🤤⭐🚀😃✈🌕😚🍻💜🐷⚽✌😊

Changelog

Full Changelog

v0.14.0-rc1

07 Jul 22:06
Pre-release

See the related issue: #9032

Or the draft changelog: docs/changelogs/v0.14.md

v0.13.1

06 Jul 15:09
8ffc7a8

go-ipfs v0.13.1 Release

This release includes security fixes for various denial-of-service (DoS) vectors when importing untrusted user input with ipfs dag import
and the v0/dag/import endpoint.

View the linked security advisory for more information.

Changelog

Full Changelog
  • github.com/ipfs/go-ipfs:
    • chore: update car
  • github.com/ipld/go-car (v0.3.2 -> v0.4.0) & (v2.1.1 -> v2.4.0):
    • Bump version in prep for releasing go-car v0
    • Revert changes to insertionindex
    • Revert changes to index.Index while keeping most of security fixes
    • Return error when section length is invalid varint
    • Drop repeated package name from CarStats
    • Benchmark Reader.Inspect with and without hash validation
    • Use consistent CID mismatch error in Inspect and BlockReader.Next
    • Use streaming APIs to verify the hash of blocks in CAR Inspect
    • test: add fuzzing for reader#Inspect
    • feat: add block hash validation to Inspect()
    • feat: add Reader#Inspect() function to check basic validity of a CAR and return stats
    • Remove support for ForEach enumeration from car-index-sorted
    • Use a fix code as the multihash code for CarIndexSorted
    • Fix testutil assertion logic and update index generation tests
    • fix: tighter constraint of singleWidthIndex width, add index recommentation docs
    • fix: explicitly disable serialization of insertionindex
    • feat: MaxAllowed{Header,Section}Size option
    • feat: MaxAllowedSectionSize default to 32M
    • fix: use CidFromReader() which has overread and OOM protection
    • fix: staticcheck catches
    • fix: revert to internalio.NewOffsetReadSeeker in Reader#IndexReader
    • fix index comparisons
    • feat: Refactor indexes to put storage considerations on consumers
    • test: v2 add fuzzing of the index
    • fix: v2 don't divide by zero in width indexes
    • fix: v2 don't allocate indexes too big
    • test: v2 add fuzzing to Reader
    • fix: v2 don't accept overflowing offsets while reading v2 headers
    • test: v2 add fuzzing to BlockReader
    • fix: v2 don't OOM if the header size is too big
    • test: add fuzzing of NewCarReader
    • fix: do bound check while checking for CIDv0
    • fix: don't OOM if the header size is too big
    • Add API to regenerate index from CARv1 or CARv2
    • PrototypeChooser support (#305) (ipld/go-car#305)
    • bump to newer blockstore err not found (#301) (ipld/go-car#301)
    • Car command supports for largebytes nodes (#296) (ipld/go-car#296)
    • fix(test): rootless fixture should have no roots, not null roots
    • Allow extracton of a raw unixfs file (#284) (ipld/go-car#284)
    • cmd/car: use a better install command in the README
    • feat: --version selector for car create & update deps
    • feat: add option to create blockstore that writes a plain CARv1 (#288) (ipld/go-car#288)
    • add car detach-index list to list detached index contents (#287) (ipld/go-car#287)
    • add car root command (#283) (ipld/go-car#283)
    • make specification of root cid in get-dag command optional (#281) (ipld/go-car#281)
    • Update version.json after manual tag push
    • Update v2 to context datastores (#275) (ipld/go-car#275)
    • update context datastore (ipld/go-car#273)
    • Traversal-based car creation (#269) (ipld/go-car#269)
    • Seek to start before index generation in ReadOnly blockstore
    • support extraction of unixfs content stored in car files (#263) (ipld/go-car#263)
    • Add a barebones readme to the car CLI (#262) (ipld/go-car#262)
    • sync: update CI config files (#261) (ipld/go-car#261)
    • fix!: use -version=n instead of -v1 for index command
    • feat: fix get-dag and add version=1 option
    • creation of car from file / directory (#246) (ipld/go-car#246)
    • forEach iterates over index in stable order (#258) (ipld/go-car#258)
  • github.com/multiformats/go-multicodec (v0.4.1 -> v0.5.0):
    • Bump version to 0.5.0
    • Bump version to 0.4.2
    • deps: update stringer version in go generate command
    • docs(readme): improved usage examples (#66) (multiformats/go-multicodec#66)

❤ Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| Masih H. Derkani | 27 | +1494/-1446 | 100 |
| Rod Vagg | 31 | +2021/-606 | 105 |
| Will | 19 | +1898/-151 | 69 |
| Jorropo | 27 | +1638/-248 | 76 |
| Aayush Rajasekaran | 1 | +130/-100 | 10 |
| whyrusleeping | 1 | +24/-22 | 4 |
| Marcin Rataj | 1 | +27/-1 | 1 |

v0.13.0

09 Jun 18:22
c9d51bb

go-ipfs v0.13.0 Release

We're happy to announce go-ipfs 0.13.0, packed full of changes and improvements!

As usual, this release includes important fixes, some of which may be critical for security. Unless the fix addresses a bug being exploited in the wild, the fix will not be called out in the release notes. Please make sure to update ASAP. See our release process for details.

Overview

Below is an outline of what's in this release, to give you a sense of everything included.

🛠 BREAKING CHANGES

ipfs block put command

The ipfs block put command now returns a CIDv1 with the raw codec by default.

  • ipfs block put --cid-codec makes block put return CID with alternative codec
    • This impacts only the returned CID; it does not trigger any validation or data transformation.
    • Retrieving a block with a different codec or CID version than it was put with is valid.
    • Codec names are validated against tables from go-multicodec library.
  • ipfs block put --format is deprecated. It used incorrect codec names and should be avoided for new deployments. Use it only if you need the old, invalid behavior, namely:
    • ipfs block put --format=v0 will produce CIDv0 (implicit dag-pb)
    • ipfs block put --format=cbor will produce CIDv1 with dag-cbor (!)
    • ipfs block put --format=protobuf will produce CIDv1 with dag-pb (!)
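A quick sketch of the new behavior (no exact CIDs are shown, since they depend on the input bytes):

```shell
# New default: the returned CID is CIDv1 with the raw codec.
echo -n "hello" | ipfs block put

# Ask for a different codec in the returned CID only; the stored
# bytes are not transformed or validated against that codec.
echo -n "hello" | ipfs block put --cid-codec=dag-cbor
```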

ipfs cid codecs command

  • Now lists codecs from go-multicodec library.
  • ipfs cid codecs --supported can be passed to only show codecs supported in various go-ipfs commands.

ipfs cid format command

  • --codec was removed and replaced with --mc to ensure existing users are aware of the following changes:
    • --mc protobuf now correctly points to code 0x50 (was 0x70, which is dag-pb)
    • --mc cbor now correctly points to code 0x51 (was 0x71, which is dag-cbor)

Swarm configuration

  • Daemon will refuse to start if long-deprecated RelayV1 config key Swarm.EnableAutoRelay or Swarm.DisableRelay is set to true.
  • If Swarm.Transports.Network.Relay is disabled, then Swarm.RelayService and Swarm.RelayClient are also disabled (unless they have been explicitly enabled).

Circuit Relay V1 is deprecated

  • By default, Swarm.RelayClient does not use Circuit Relay V1. Circuit V1 support is only enabled when Swarm.RelayClient.StaticRelays are specified.
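If you still depend on Circuit V1, a config sketch (the multiaddr below is a placeholder, not a real or recommended relay):

```shell
# Pin a static V1 relay; static relays are now the only way Circuit V1 is used.
ipfs config --json Swarm.RelayClient.StaticRelays \
  '["/ip4/203.0.113.7/tcp/4001/p2p/QmRelayPeerIDPlaceholder"]'
```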

ls requests for /multistream/1.0.0 are removed

  • go-libp2p 0.19 removed support for the undocumented ls command (PR). If you are still using it for internal testing, it is time to refactor (example).

Gateway Behavior

Directory listings returned by the HTTP Gateway won't have a size column if the directory has more entries than the Gateway.FastDirIndexThreshold config (default: 100).
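The threshold is a regular config key, so it can be tuned; a sketch (the value is illustrative):

```shell
# Compute the size column for directories with up to 1000 entries.
ipfs config --json Gateway.FastDirIndexThreshold 1000
```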

To understand the wider context why we made these changes, read Highlights below.

🔦 Highlights

🧑‍💼 libp2p Network Resource Manager (Swarm.ResourceMgr)

You can now easily bound how many resources libp2p consumes! This helps protect nodes from using more resources than are available to them.

The libp2p Network Resource Manager is disabled by default, but can be enabled via:

ipfs config --json Swarm.ResourceMgr.Enabled true

When enabled, it applies some safe defaults that can be inspected and adjusted with:

  • ipfs swarm stats --help
  • ipfs swarm limit --help

User changes persist to config at Swarm.ResourceMgr.

The Resource Manager will be enabled by default in a future release.
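Once enabled, a sketch of inspecting the defaults (the "system" scope argument is an assumption; check the --help output of each command for the scopes available in your version):

```shell
# Show the effective limits and live usage for the system-wide scope.
ipfs swarm limit system
ipfs swarm stats system
```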

🔃 Relay V2 client with auto discovery (Swarm.RelayClient)

All the pieces needed for hole-punching are now enabled by default, improving connectivity with nodes behind NATs and firewalls!

This release enables Swarm.RelayClient by default, along with circuit v2 relay discovery provided by go-libp2p v0.19.0. This means:

  1. go-ipfs will coordinate with the counterparty using a relayed connection, to upgrade to a direct connection through a NAT/firewall whenever possible.
  2. go-ipfs daemon will automatically use public relays if it detects that it cannot be reached from the public internet (e.g., it's behind a firewall). This results in a /p2p-circuit address from a public relay.
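A rough way to check whether your daemon has fallen back to a public relay is to look for a relayed address (this grep-based check is an informal suggestion, not an official diagnostic):

```shell
# After the daemon has been running for a while behind NAT,
# relayed addresses appear with a /p2p-circuit component.
ipfs id | grep p2p-circuit
```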


🌉 HTTP Gateway improvements

HTTP Gateway enables seamless interop with the existing Web, clients, user agents, tools, frameworks and libraries.

This release ships the first batch of improvements that enable creation of faster and smarter CDNs, and unblocks creation of light clients for Mobile and IoT.

Details below.

🍱 Support for Block and CAR response formats

Alternative response formats can now be requested from the Gateway, avoiding the need to trust the gateway.

For now, {format} is limited to two options:

  • raw – fetching single block
  • car – fetching entire DAG behind a CID as a CARv1 stream

When not set, the default UnixFS response is returned.

Why these two formats? Requesting a Block or CAR for /ipfs/{cid} allows a client to use gateways in a trustless fashion. These types of gateway responses can be verified locally and rejected if the digest inside the requested CID does not match the received bytes. This enables the creation of "light IPFS clients", which use HTTP Gateways as an inexpensive transport for content-addressed data, unlocking use in Mobile and IoT contexts.

Future releases will add support for dag-json and dag-cbor responses.

There are two ways to request a specific response format:

  1. HTTP header: Accept: application/vnd.ipld.{format}
  2. URL parameter: ?format=
     • Useful for creating "Download CAR" links.

Usage examples:

  1. Downloading a single raw Block and manually importing it to the local datastore:
$ curl  -H 'Accept: application/vnd.ipld.raw' "http://127.0.0.1:8080/ipfs/QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN" --output block.bin
...
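For comparison, a sketch of the URL-parameter variant requesting the whole DAG as a CARv1 stream (same CID as the Block example above):

```shell
# Download the full DAG behind the CID as a CARv1 file.
curl "http://127.0.0.1:8080/ipfs/QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN?format=car" \
  --output dag.car
```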

v0.13.0-rc1

04 May 22:40
Pre-release

Tracking Issue: #8640

v0.12.2

08 Apr 21:40
0e8b121

go-ipfs v0.12.2 Release

This patch release fixes a security issue wherein traversing some malformed DAGs can cause the node to panic.

Given that some users haven't yet gone through the migration for v0.12.0, we have also backported this to v0.11.1.

See also the security advisory: GHSA-mcq2-w56r-5w2w

Changelog

Full Changelog
  • github.com/ipld/go-codec-dagpb (v1.3.0 -> v1.3.2):
    • fix: use protowire for Links bytes decoding

❤ Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| Rod Vagg | 1 | +34/-19 | 2 |

v0.11.1

08 Apr 21:42

go-ipfs v0.11.1 Release

This patch release covers a couple of security fixes.

Malformed DAG Traversal

This patch release fixes a security issue wherein traversing some malformed DAGs can cause the node to panic.

This was backported from v0.12.2, since some users haven't yet gone through the v0.12 migration.

See also the security advisory: GHSA-mcq2-w56r-5w2w

Docker Compose Ports

This patch release fixes a security issue with the docker-compose.yaml file in which the IPFS daemon API listens on all interfaces instead of only the loopback interface, which could allow remote callers to control your IPFS daemon. If you use the included docker-compose.yaml file, it is recommended to upgrade.

See also the security advisory: GHSA-fx5p-f64h-93xc

Thanks to @LynHyper for finding and disclosing this.

Changelog

Full Changelog
  • github.com/ipfs/go-ipfs:
    • fix: listen on loopback for API and gateway ports in docker-compose.yaml
  • github.com/ipld/go-codec-dagpb (v1.3.0 -> v1.3.2):
    • fix: use protowire for Links bytes decoding

❤ Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| Rod Vagg | 1 | +34/-19 | 2 |
| guseggert | 1 | +10/-3 | 1 |

v0.12.1

19 Mar 13:11
da2b9bd

go-ipfs v0.12.1 Release

This patch release fixes a security issue with the docker-compose.yaml file in which the IPFS daemon API listens on all interfaces instead of only the loopback interface, which could allow remote callers to control your IPFS daemon. If you use the included docker-compose.yaml file, it is recommended to upgrade.

See also the security advisory: GHSA-fx5p-f64h-93xc

Thanks to @LynHyper for finding and disclosing this.

Changelog

Full Changelog
  • github.com/ipfs/go-ipfs:
    • fix: listen on loopback for API and gateway ports in docker-compose.yaml

❤ Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| guseggert | 1 | +10/-3 | 1 |

v0.12.0

18 Feb 15:20
06191df

go-ipfs 0.12.0 Release

We're happy to announce go-ipfs 0.12.0. This release switches the storage of IPLD blocks to be keyed by multihash instead of CID.

As usual, this release includes important fixes, some of which may be critical for security. Unless the fix addresses a bug being exploited in the wild, the fix will not be called out in the release notes. Please make sure to update ASAP. See our release process for details.

🛠 BREAKING CHANGES

  • ipfs refs local will now list all blocks as if they were raw CIDv1 instead of with whatever CID version and IPLD codecs they were stored with. All other functionality should remain the same.

Note: This change also affects ipfs-update, so if you use that tool to manage your go-ipfs installation, grab ipfs-update v1.8.0 from dist.

Keep reading to learn more details.

🔦 Highlights

There is only one change since 0.11:

Blockstore migration from full CID to Multihash keys

We are switching the default low-level datastore to be keyed only by the Multihash part of the CID, deduplicating some blocks in the process. The blockstore will become codec-agnostic.

Rationale

The blockstore/datastore layers are not concerned with data interpretation, only with the storage of binary blocks and verification that the Multihash they are addressed with (which comes from the CID) matches the block. In fact, different CIDs, with different codec prefixes, may carry the same multihash and reference the same block. Carrying the CID abstraction this low in the stack meant potentially fetching and storing the same blocks multiple times just because they are referenced by different CIDs. Prior to this change, a CIDv1 with a dag-cbor codec and a CIDv1 with a raw codec, both containing the same multihash, would result in two identical blocks being stored. A CIDv0 and a CIDv1 both naming the same dag-pb block would also result in two copies.
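As a quick illustration, a CIDv0 and its CIDv1 (base32) form wrap the same multihash, so after this migration both resolve to a single stored block (the CID shown is an arbitrary dag-pb example):

```shell
# Print the CIDv1 (base32) equivalent of a CIDv0; both identifiers
# carry the same multihash and now key the same block.
ipfs cid base32 QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN
```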

How migration works

In order to perform the switch, and start referencing all blocks by their multihash, a migration will occur on update. This migration will take the repository version from 11 (current) to 12.

One thing to note is that content addressed by CIDv0 (all the hashes that start with Qm..., the current default in go-ipfs) does not need any migration, as CIDv0s are raw multihashes already. This means the migration will be very lightweight for the majority of users.

The migration process will take care of re-keying any CIDv1 block so that it is only addressed by its multihash. Large nodes with lots of CIDv1-addressed content will need to go through a heavier process as the migration happens. This is how the migration works:

  1. Phase 1: The migration script will perform a pass for every block in the datastore and will add all CIDv1s found to a file named 11-to-12-cids.txt, in the go-ipfs configuration folder. Nothing is written in this first phase and it only serves to identify keys that will be migrated in phase 2.
  2. Phase 2: The migration script will perform a second pass where every CIDv1 block will be read and re-written with its raw-multihash as key. There is 1 worker performing this task, although more can be configured. Every 100MiB-worth of blocks (this is configurable), each worker will trigger a datastore "sync" (to ensure all written data is flushed to disk) and delete the CIDv1-addressed blocks that were just renamed. This provides a good compromise between speed and resources needed to run the migration.

At every sync, the migration emits a log message showing how many blocks need to be rewritten and how far the process is.

FlatFS specific migration

For those using a single FlatFS datastore as their backing blockstore (i.e. the default behavior), the migration (but not reversion) will take advantage of the ability to easily move/rename the blocks to improve migration performance.

Unfortunately, other common datastores do not support renames, which is what makes this FlatFS-specific. If you are running a large custom datastore that supports renames, you may want to consider running a fork of fs-repo-11-to-12 specific to your datastore.

If you want to disable this behavior, set the environment variable IPFS_FS_MIGRATION_11_TO_12_ENABLE_FLATFS_FASTPATH to false.

Migration configuration

For those who want to tune the migration more precisely for their setups, there are two environment variables to configure:

  • IPFS_FS_MIGRATION_11_TO_12_NWORKERS : an integer describing the number of migration workers - defaults to 1
  • IPFS_FS_MIGRATION_11_TO_12_SYNC_SIZE_BYTES : an integer describing the number of bytes after which migration workers will sync - defaults to 104857600 (i.e. 100MiB)
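For example, to run the migration with more workers and a smaller sync interval (the values are illustrative, not recommendations):

```shell
# 4 workers, sync every 50 MiB instead of the default 100 MiB.
export IPFS_FS_MIGRATION_11_TO_12_NWORKERS=4
export IPFS_FS_MIGRATION_11_TO_12_SYNC_SIZE_BYTES=52428800

# The environment is picked up when the daemon auto-runs the migration.
ipfs daemon --migrate
```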

Migration caveats

Large repositories with very large numbers of CIDv1s should be mindful of the migration process:

  • We recommend ensuring that IPFS runs with an appropriate (high) file-descriptor limit, particularly when Badger is used as the datastore backend. Badger is known to open many tables when experiencing a high number of writes, which may trigger "too many open files" errors during the migration. If this happens, the migration can be retried with a higher FD limit (see below).
  • Migrations using the Badger datastore may not immediately reclaim the space freed by the deletion of migrated blocks, thus space requirements may grow considerably. A periodic Badger-GC is run every 2 minutes, which will reclaim space used by deleted and de-duplicated blocks. The last portion of the space will only be reclaimed after go-ipfs starts (the Badger-GC cycle will trigger after 15 minutes).
  • While there is a revert process detailed below, we recommend keeping a backup of the repository, particularly for very large ones, in case an issue happens, so that the revert can happen immediately and cases of repository corruption due to crashes or unexpected circumstances are not catastrophic.

Migration interruptions and retries

If a problem occurs during the migration, it is possible to simply restart and retry it:

  1. Phase 1 will never overwrite the 11-to-12-cids.txt file, but only append to it (so that a list of things we were supposed to have migrated during our first attempt is not lost - this is important for reverts, see below).
  2. Phase 2 will proceed to continue re-keying blocks that were not re-keyed during previous attempts.

Migration reverts

It is also possible to revert the migration after it has succeeded, for example to go to a previous go-ipfs version (<=0.11), even after starting and using go-ipfs in the new version (>=0.12). The revert process works as follows:

  1. The 11-to-12-cids.txt file is read, which has the list of all the CIDv1s that had to be rewritten for the migration.
  2. A CIDv1-addressed block is written for every item on the list. This work is performed by 1 worker (configurable), syncing every 100MiB (configurable).
  3. It is ensured that every CIDv1 pin, and every CIDv1 reference in MFS, is also written as a CIDv1-addressed block, regardless of whether they were part of the original migration or were added later.

The revert process does not delete any blocks; it only makes sure that blocks that were accessible via CIDv1s before the migration are again keyed with CIDv1s. This may result in the datastore becoming twice as large (i.e. if all the blocks were CIDv1-addressed before the migration). It is done this way to cover corner cases: a user can add CIDv1s after the migration that reference blocks which existed as CIDv0 before it. The revert aims to ensure that no data becomes unavailable on downgrade.

While go-ipfs will auto-run the migration for you, it will not run the reversion. To do so you can download the latest migration binary or use ipfs-update.
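A sketch of running the revert manually (the flag names are assumptions about the fs-repo-11-to-12 binary; check its -help output before relying on them):

```shell
# Revert the repo at $IPFS_PATH back to repo version 11.
fs-repo-11-to-12 -revert -path="$IPFS_PATH"
```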

Custom datastores

As with previous migrations if you work with custom datastores and want to leverage the migration you can run a fork of fs-repo-11-to-12 specific to your datastore. The repo includes instructions on building for different datastores.

For this migration, if your datastore has fast renames you may want to consider writing some code to leverage the particular efficiencies of your datastore similar to what was done for FlatFS.

Changelog

Full Changelog
  • github.com/ipfs/go-ipfs:
    • Release v0.12.0
    • docs: v0.12.0 release notes
    • chore: bump migrations dist.ipfs.io CID to contain fs-repo-11-to-12 v1.0.2
    • feat: refactor Fetcher interface used for downloading migrations (#8728) (ipfs/go-ipfs#8728)
    • feat: log multifetcher errors
    • Release v0.12.0-rc1
    • chore: bump Go version to 1.16.12
    • feat: switch to raw multihashes for blocks (ipfs/go-ipfs#6816)
    • chore: add release template snippet for fetching artifact tarball
    • chore: bump Go version to 1.16.11
    • chore: add release steps for upgrading Go
    • Merge branch 'release'
    • fix(corehttp): adjust peer counting metrics (#8577) (ipfs/go-ipfs#8577)
    • chore: update version to v0.12.0-dev
  • github.com/ipfs/go-filestore (v0.1.0 -> v1.1...