Releases: ethersphere/bee

v2.1.0

28 May 10:21 · de7eccc

The Bee team is excited to announce the v2.1.0 release. 🎉

In this release, localstore transactions have been refactored to bring increased stability and performance gains.

We have also detected that some nodes have experienced corruption of their reserves. To address this, the release introduces the new bee db repair-reserve --data-dir=... command, which scans the node's reserve and fixes any corrupted chunks. All node operators should run this command immediately after updating.

Warning

For nodes running on the same physical disk, make sure to run the command on one node at a time rather than concurrently, since running it on multiple nodes at once can lead to drastic slowdowns. As with all db commands, the nodes must be stopped first.
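
For illustration, two nodes sharing one physical disk would be repaired one after the other, with both nodes stopped first (the data directory paths below are placeholders):

bee db repair-reserve --data-dir=/var/lib/bee-node1
bee db repair-reserve --data-dir=/var/lib/bee-node2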

The release also includes a new redistribution contract which introduces a limit on the number of freezes per round. The specific rate of the limit is configurable by the team; at the time of the release, the default behavior is the same as with the old contract. The goal of the new db repair-reserve command and the localstore improvements is to bring the freezing rate down closer to an acceptable level, in which case the freezing limit can be left untouched.

The Bee team will also coordinate the pausing of the old contract based on a predetermined block height of the Gnosis Chain.

With this release, the endpoints under the Debug API have also been included in the main API. The Debug API will be removed entirely in the next release (v2.2.0).

For questions, comments, and feedback, reach out on Discord.

Features

  • A new redistribution contract has been released that controls the freezing limit. (#240)

Bug fixes

  • Fixed an error that occurred when uploading the same file with pinning multiple times. (#4638)
  • Fixed a data race in the reserve sampler which may resolve inclusion proof related errors in the redistribution game. (#4665)

Hardening

  • Localstore refactoring (#4626)
    • The same leveldb transaction is now used for both indexstore and chunkstore writes (a minimal sketch of the idea follows this list).
    • The stewardship upload endpoint now requires a valid batchID in the request header.
    • When the reserve capacity is reached, only enough chunks are evicted to fall below capacity. Previously, the evictor would remove an entire bin of chunks belonging to a batch, regardless of how much capacity was recovered in the process. With this change, the loss of chunks belonging to bins shallower than the neighborhood's storage radius is minimized.
    • When the radius decreases, the bins which have been evicted previously are all properly reset to re-initiate syncing.
  • Improved logging when the node has an insufficient balance to buy a batch. (#4666)
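
As a minimal sketch of the single-transaction idea mentioned above (illustrative only, not Bee's actual localstore code; keys and paths are made up), the goleveldb library allows grouping an indexstore write and a chunkstore write so they commit atomically:

package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("/tmp/example-db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Open a single transaction covering both logical stores.
	tx, err := db.OpenTransaction()
	if err != nil {
		log.Fatal(err)
	}

	// The index entry and the chunk payload are staged together...
	if err := tx.Put([]byte("index/chunk-address"), []byte("metadata"), nil); err != nil {
		tx.Discard()
		log.Fatal(err)
	}
	if err := tx.Put([]byte("chunk/chunk-address"), []byte("chunk data"), nil); err != nil {
		tx.Discard()
		log.Fatal(err)
	}

	// ...and committed atomically: either both writes land or neither does.
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}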

For a full PR rundown, please consult the v2.1.0 diff.

v2.1.0-rc2 (Pre-release)

13 May 19:32 · ffc1aef

v2.1.0-rc1 (Pre-release)

09 May 17:06 · 52c2475

v2.0.1

25 Apr 12:06

This is a patch release that updates libp2p to the latest version, addressing a memory leak issue.

v2.0.0

26 Mar 11:54 · 501f8a4

The Bee team is elated to announce the official v2.0.0 release. 🎉

In this release we introduce a brand new data redundancy mechanism in Swarm which, under the hood, makes use of Reed-Solomon erasure coding and dispersed replicas. This brings a whole new level of protection against potential data loss.

Erasure Coding

A new header, Swarm-Redundancy-Level: n, can be passed to upload requests to turn on erasure coding, where n is an integer in [0, 4]. Refer to the table below for the different levels of redundancy and their chunk loss tolerance.

Redundancy Level   Pseudonym   Chunk Retrieval Failure Tolerance
0                  None        0%
1                  Medium      1%
2                  Strong      5%
3                  Insane      10%
4                  Paranoid    50%
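
For example, an upload with redundancy level 2 could look as follows (the default API port 1633 and the postage batch ID placeholder are assumptions, not part of this release note):

curl -X POST "http://localhost:1633/bzz?name=file.txt" \
  -H "Swarm-Postage-Batch-Id: {batch-id}" \
  -H "Swarm-Redundancy-Level: 2" \
  --data-binary @file.txt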

Testnet

With this milestone release, the Swarm Testnet is now officially running on the Sepolia blockchain.

Apply the configuration changes below to a fresh node to be able to connect to the Sepolia Testnet.

bootnode:
- /dnsaddr/sepolia.testnet.ethswarm.org
blockchain-rpc-endpoint: {a-sepolia-rpc-endpoint}
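
With those values saved in the node's configuration file, a fresh node can then be started as usual (the config path below is illustrative):

bee start --config /home/user/bee-config.yaml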

For questions, comments, and feedback, reach out on Discord.

Features

  • Uploads may now be equipped with erasure coding which brings a new level of data redundancy to Swarm. ( #4491 ).
  • Added a new API endpoint to obtain the content type and length of an upload by sending a HEAD request to the /bzz endpoint; see the example after this list. ( #4588 )
  • Re-added livesyncing to chunk syncing in the puller service. ( #4554 )
  • Default testnet settings are now configured for the Sepolia blockchain. ( #4491 )
  • Added a new db command that verifies the integrity of the pinned content. ( #4565 )
  • The pinned content integrity verification can also be done using the API, namely with the new pins/check endpoint. ( #4573 )
  • Added the ability for fresh nodes to use an external neighborhood suggester through the config options for mining the overlay address into a specific neighborhood. By default, the Swarmscanner's least populated/most optimal neighborhood suggester API is used. ( #4580 )
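
As a sketch of the new HEAD endpoint referenced in the list above (the default API port 1633 and the reference placeholder are assumptions), the content type and length are returned in the response headers:

curl -I http://localhost:1633/bzz/{reference}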

Bug fixes

  • Localstore fixes
    • Fixed a bug where deleting a pin reference that has been pinned more than once was not removing the chunks from the localstore. ( #4558 )
    • Fixed a race condition in the cachestore that was causing refCnt inconsistencies. ( #4525 )
    • Fixed a bug in the cachestore that would not dereference already cached chunks after a reserve eviction. ( #4567 )
    • Fixed a cache size bug that would undercount the number of chunks removed from the cache, leading to a cache leak until the next restart of the node. ( #4571 )
    • Fixed a leak in the upload store where the metadata of individual chunks persisted in the localstore long after the chunks had been successfully uploaded to the network. ( #4562 )
    • Fixed the storage radius metric being set incorrectly. ( #4518 )
    • Fixed a bug where the storage radius would not decrease even though the true reserve size was lower than the threshold. ( #4514 )
  • Fixed a vulnerability in the encryption of uploaded data. ( #4604 )

Hardening

  • Updated the btcd crypto library version. ( #4516 )
  • The ReserveSizeWithRadius field, which counts the chunks in the reserve that fall within the node's radius of responsibility, has been added to the status protocol. ( #4585 )
  • Stamper changes
    • The rules for how chunks are stamped before uploading have been changed: regardless of batch type (immutable or mutable), if a chunk has been stamped before, the chunk is restamped using the old batch index and a new timestamp. ( #4556 )
    • Regardless of batch type, when two chunks have the same batch index, the reserve now keeps the one with the newer timestamp. ( #4559 )

For a full PR rundown, please consult the v2.0.0 diff.

v1.18.2

14 Dec 14:56 · 759f56f

Building upon the previous release, the sync intervals are re-synced so that nodes may collect any potentially missing chunks from the network.

The initial syncing a node performs to collect missing chunks from peers, also known as historical syncing, is now rate limited to lower and stabilize CPU usage.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Fixed a panic when running compact with an empty db. ( #4488 )

Features

  • Puller historical syncing is now rate limited to not exceed 500 chunks/second. ( #4504 )
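
As a generic illustration of this kind of throttle (not the puller's actual code), a token-bucket limiter capped at 500 events per second can be built in Go with golang.org/x/time/rate:

package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// At most 500 events per second, with a small burst allowance.
	limiter := rate.NewLimiter(rate.Limit(500), 50)

	for i := 0; i < 5; i++ {
		// Wait blocks until the token bucket permits the next event.
		if err := limiter.Wait(context.Background()); err != nil {
			return
		}
		fmt.Println("processing chunk", i)
	}
}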

Hardening

  • Puller sync intervals are reset to sync missing chunks. ( #4499 )
  • Various UX improvements. ( #4487 #4466 #4489 )

For a full PR rundown, please consult the v1.18.2 diff.

v1.18.1

07 Dec 09:47 · ed24b89

This is a patch release that properly resets the batchstore so that batches can be resynced from the new postage stamp contract.

For questions, comments, and feedback, reach out on Discord.

For a full PR rundown please consult the v1.18.1 diff.

v1.18.0

06 Dec 16:13 · dd14545

The main theme of this release is the delivery of the last phase of storage incentives, the fourth phase, and thus the end of the storage incentive saga. For this reason, this is a breaking release, as the handshake version has been bumped. The release also includes one bug fix and minor improvements, which can be found below.

Breaking changes

  • The handshake protocol version has been bumped due to the release of a new redistribution contract. (#4490)

New features

  • Introduction of a command that lists all chunk hashes for a given file (#4484)
  • Swarm cache header has been added to several API endpoints (#4457, #4486)
  • Phase four of storage incentives (#4373)

Bugfixes

  • Fixed the re-upload of a file whose upload was previously manually cancelled. (#4468)

v1.17.6

09 Nov 12:24 · 48a603c

With this release, many hardening issues were tackled. The team's focus has been mostly on improving connectivity of nodes across the network and bringing performance improvements to chunk caching operations.

Also featured is a new DB command that will perform a chunk validation of the chunkstore, similar to the optional step in the compaction command.

The retrieval protocol now has a multiplexing capability similar to pushsync, where a forwarder peer that can directly access the neighborhood of a chunk fires multiple requests in parallel.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Fixed a bug where parallel uploads can cause a race condition in the uploadstore. ( #4434 )

Features

  • Added a new DB command that performs a validation check of the chunks in the chunkstore; see the illustration after this list. ( #4435 )
  • Added multiplexing to the retrieval protocol, where a forwarding peer that can reach into the neighborhood of the chunk fires multiple attempts to different peers. ( #4405 )
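
As an illustration of the new validation check, it would be run on a stopped node roughly as follows (the data directory path is a placeholder, and the sub-command name is an assumption based on the naming pattern of the other db commands):

bee db validate --data-dir=/var/lib/bee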

Hardening

  • Added extra documentation about the logger API in the CODING.md. ( #4406 )
  • Fixed logs containing wrong token name. ( #4408 )
  • Added metrics for zero addressed chunks received by the pullsync protocol. ( #4407 )
  • Kademlia depth value is overwritten by the storage radius for full nodes. ( #4410 )
  • The salud response duration check is now stricter. ( #4417 #4426 )
  • Upgraded libp2p to the latest version v0.30.0. ( #3927 )
  • When batches expire and are removed from the batchstore, the stamp issuer data is also removed. ( #4416 #4431 #4439 )
  • Added a new log entry that displays how long the postage listener will sleep until the next blockchain sync event. ( #4444 #4426 )
  • API now returns 404 instead of 500 when no peer can be found for a chunk retrieval attempt. ( #4436 )
  • Upgraded crypto related packages. ( #4425 )
  • Added various connectivity related improvements: ( #4412 )
    • The reachability of connected peers is tested periodically instead of once at the time of the initial connection.
    • All neighbor peers, not just a small subset, are broadcast to a newly connected neighbor.
    • Neighbor peers are periodically broadcast to other neighbors.
    • A peer will be re-added to the addressbook if hive detects an underlay change.

Performance

  • Added a cache eviction worker so cached chunks do not need to be removed immediately when adding new entries to an over-capacity cache. ( #4423 #4433 )
  • The POST /pins/{ref} API endpoint now stores chunks in parallel. ( #4427 )
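
For reference, a pin is created through that endpoint as follows (assuming the default API port 1633; the reference is a placeholder):

curl -X POST http://localhost:1633/pins/{reference}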

For a full PR rundown please consult the v1.17.6 diff.

v1.17.5

16 Oct 14:25

In this small but important release, the Bee team introduces a new db compaction command to recover disk space. To prevent any data loss, operators should run the compaction on a copy of the localstore directory and, if successful, replace the original localstore with the compacted copy. The command is available as a sub-command under db, as follows:

bee db compact --data-dir=
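
For illustration, the copy-then-replace workflow described above might look like this on a stopped node (all paths are placeholders):

cp -r /var/lib/bee /var/lib/bee-copy
bee db compact --data-dir=/var/lib/bee-copy
mv /var/lib/bee /var/lib/bee-old
mv /var/lib/bee-copy /var/lib/bee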

The pushsync and retrieval protocols now feature a fallback mechanism that tries unreachable and unhealthy peers when no reachable or healthy peers are left.

We've also added new logging guidelines for contributors in the readme.

For questions, comments, and feedback, reach out on Discord.

Bug fixes

  • Fixed a bug where a node can get stuck syncing the same interval if the upstream peer is unable to send the chunk data. ( #4339 )

Features

  • Added a new localstore compaction command that resizes sharky to the smallest size possible. ( #4329 )

Hardening

  • Added a new logging guideline for contributors. ( #4352 )
  • Improved logging of the retrieval package and increased the minimum healthy peers per bin in the salud service.
  • Varying levels of peer filtering for the pushsync and retrieval protocols ( #4388 )

For a full PR rundown please consult the v1.17.5 diff.