From b1718a7d8c66d0f7c77df9a002f8a5a2e0f368dc Mon Sep 17 00:00:00 2001
From: Robert Zaremba
Date: Sat, 30 Oct 2021 14:43:04 +0100
Subject: [PATCH 1/3] style: lint go and markdown (#10060)

## Description

+ fixing `x/bank/migrations/v44.migrateDenomMetadata` - we could potentially put wrong data in a new key if the old keys have variable length.
+ linting the code

Putting these in the same PR because I found the issue when running a linter.

Depends on: #10112

---

### Author Checklist

*All items are required. Please add a note to the item if the item is not applicable and please add links to any relevant follow up issues.*

I have...

- [x] included the correct [type prefix](https://github.com/commitizen/conventional-commit-types/blob/v3.0.0/index.json) in the PR title
- [x] added `!` to the type prefix if API or client breaking change
- [x] targeted the correct branch (see [PR Targeting](https://github.com/cosmos/cosmos-sdk/blob/master/CONTRIBUTING.md#pr-targeting))
- [ ] provided a link to the relevant issue or specification
- [x] followed the guidelines for [building modules](https://github.com/cosmos/cosmos-sdk/blob/master/docs/building-modules)
- [ ] included the necessary unit and integration [tests](https://github.com/cosmos/cosmos-sdk/blob/master/CONTRIBUTING.md#testing)
- [ ] added a changelog entry to `CHANGELOG.md`
- [ ] included comments for [documenting Go code](https://blog.golang.org/godoc)
- [ ] updated the relevant documentation or specification
- [ ] reviewed "Files changed" and left comments if necessary
- [ ] confirmed all CI checks have passed

### Reviewers Checklist

*All items are required. Please add a note if the item is not applicable and please add your handle next to the items reviewed if you only reviewed selected items.*

I have...
- [ ] confirmed the correct [type prefix](https://github.com/commitizen/conventional-commit-types/blob/v3.0.0/index.json) in the PR title - [ ] confirmed `!` in the type prefix if API or client breaking change - [ ] confirmed all author checklist items have been addressed - [ ] reviewed state machine logic - [ ] reviewed API design and naming - [ ] reviewed documentation is accurate - [ ] reviewed tests and test coverage - [ ] manually tested (if applicable) (cherry picked from commit 479485f95dbb4f3721d688823b32370cb0020af3) # Conflicts: # CODING_GUIDELINES.md # CONTRIBUTING.md # STABLE_RELEASES.md # contrib/rosetta/README.md # cosmovisor/README.md # crypto/keyring/keyring.go # db/README.md # docs/404.md # docs/DOCS_README.md # docs/architecture/adr-038-state-listening.md # docs/architecture/adr-040-storage-and-smt-state-commitments.md # docs/architecture/adr-043-nft-module.md # docs/architecture/adr-044-protobuf-updates-guidelines.md # docs/architecture/adr-046-module-params.md # docs/migrations/pre-upgrade.md # docs/migrations/rest.md # docs/ru/README.md # docs/run-node/rosetta.md # docs/run-node/run-node.md # docs/run-node/run-testnet.md # go.mod # scripts/module-tests.sh # snapshots/README.md # store/streaming/README.md # store/streaming/file/README.md # store/v2/flat/store.go # store/v2/smt/store.go # x/auth/ante/sigverify.go # x/auth/middleware/basic.go # x/auth/spec/01_concepts.md # x/auth/spec/05_vesting.md # x/auth/spec/07_client.md # x/authz/spec/05_client.md # x/bank/spec/README.md # x/crisis/spec/05_client.md # x/distribution/spec/README.md # x/epoching/keeper/keeper.go # x/epoching/spec/03_to_improve.md # x/evidence/spec/07_client.md # x/feegrant/spec/README.md # x/gov/spec/01_concepts.md # x/gov/spec/07_client.md # x/group/internal/orm/spec/01_table.md # x/mint/spec/06_client.md # x/slashing/spec/09_client.md # x/slashing/spec/README.md # x/staking/spec/09_client.md # x/upgrade/spec/04_client.md --- CODING_GUIDELINES.md | 89 + CONTRIBUTING.md | 75 + STABLE_RELEASES.md | 72 + contrib/rosetta/README.md | 8 + cosmovisor/README.md | 76 + crypto/keyring/keyring.go | 4 + crypto/keys/multisig/amino.go | 2 +- crypto/keys/secp256k1/secp256k1_nocgo.go | 1 + crypto/ledger/ledger_notavail.go | 2 + db/README.md | 72 + docs/404.md | 47 + docs/DOCS_README.md | 19 + .../adr-010-modular-antehandler.md | 4 - .../adr-022-custom-panic-handling.md | 4 - docs/architecture/adr-038-state-listening.md | 43 + ...r-040-storage-and-smt-state-commitments.md | 90 + docs/architecture/adr-043-nft-module.md | 340 +++ .../adr-044-protobuf-updates-guidelines.md | 109 + docs/architecture/adr-046-module-params.md | 184 ++ docs/migrations/pre-upgrade.md | 55 + docs/migrations/rest.md | 4 + docs/ru/README.md | 3 + docs/run-node/rosetta.md | 55 + docs/run-node/run-node.md | 23 + docs/run-node/run-testnet.md | 99 + go.mod | 73 + scripts/module-tests.sh | 48 + server/rosetta/client_online.go | 2 +- server/rosetta/lib/internal/service/online.go | 2 +- simapp/simd/cmd/genaccounts.go | 1 - snapshots/README.md | 236 ++ store/streaming/README.md | 67 + store/streaming/file/README.md | 66 + store/v2/flat/store.go | 479 ++++ store/v2/smt/store.go | 99 + types/denom.go | 2 +- x/auth/ante/sigverify.go | 9 + x/auth/middleware/basic.go | 358 +++ x/auth/spec/01_concepts.md | 10 + x/auth/spec/05_vesting.md | 5 + x/auth/spec/07_client.md | 421 ++++ x/authz/spec/05_client.md | 172 ++ x/bank/spec/README.md | 6 + x/crisis/spec/05_client.md | 31 + x/distribution/legacy/v043/helpers.go | 2 +- x/distribution/spec/README.md | 6 + 
x/epoching/keeper/keeper.go | 192 ++
x/epoching/spec/03_to_improve.md | 44 +
x/evidence/spec/07_client.md | 188 ++
x/feegrant/spec/README.md | 6 +
x/gov/spec/01_concepts.md | 7 +
x/gov/spec/07_client.md | 1060 +++++++++
x/group/internal/orm/spec/01_table.md | 40 +
x/mint/spec/06_client.md | 224 ++
x/slashing/spec/09_client.md | 294 +++
x/slashing/spec/README.md | 7 +
x/staking/spec/09_client.md | 2088 +++++++++++++++++
x/upgrade/spec/04_client.md | 459 ++++
58 files changed, 8170 insertions(+), 14 deletions(-)
create mode 100644 CODING_GUIDELINES.md
create mode 100644 db/README.md
create mode 100644 docs/404.md
create mode 100644 docs/architecture/adr-043-nft-module.md
create mode 100644 docs/architecture/adr-044-protobuf-updates-guidelines.md
create mode 100644 docs/architecture/adr-046-module-params.md
create mode 100644 docs/migrations/pre-upgrade.md
create mode 100755 docs/ru/README.md
create mode 100644 docs/run-node/run-testnet.md
create mode 100644 scripts/module-tests.sh
create mode 100644 snapshots/README.md
create mode 100644 store/streaming/README.md
create mode 100644 store/streaming/file/README.md
create mode 100644 store/v2/flat/store.go
create mode 100644 store/v2/smt/store.go
create mode 100644 x/auth/middleware/basic.go
create mode 100644 x/auth/spec/07_client.md
create mode 100644 x/authz/spec/05_client.md
create mode 100644 x/crisis/spec/05_client.md
create mode 100644 x/epoching/keeper/keeper.go
create mode 100644 x/epoching/spec/03_to_improve.md
create mode 100644 x/evidence/spec/07_client.md
create mode 100644 x/gov/spec/07_client.md
create mode 100644 x/group/internal/orm/spec/01_table.md
create mode 100644 x/mint/spec/06_client.md
create mode 100644 x/slashing/spec/09_client.md
create mode 100644 x/staking/spec/09_client.md
create mode 100644 x/upgrade/spec/04_client.md

diff --git a/CODING_GUIDELINES.md b/CODING_GUIDELINES.md
new file mode 100644
index 000000000000..3ea8d20556d4
--- /dev/null
+++ b/CODING_GUIDELINES.md
@@ -0,0 +1,89 @@
+# Coding Guidelines
+
+This document is an extension to [CONTRIBUTING](./CONTRIBUTING.md) and provides more details about the coding guidelines and requirements.
+
+## API & Design
+
++ Code must be well structured:
+  + packages must have a limited responsibility (different concerns can go to different packages),
+  + types must be easy to compose,
+  + think about maintainability and testability.
++ "Depend upon abstractions, [not] concretions".
++ Try to limit the number of methods you are exposing. It's easier to expose something later than to hide it.
++ Take advantage of the `internal` package concept.
++ Follow agreed-upon design patterns and naming conventions.
++ Publicly-exposed functions are named logically, and have forward-thinking arguments and return types.
++ Avoid global variables and global configurators.
++ Favor composable and extensible designs.
++ Minimize code duplication.
++ Limit third-party dependencies.
+
+Performance:
+
++ Avoid unnecessary operations or memory allocations.
+
+Security:
+
++ Pay proper attention to exploits involving:
+  + gas usage
+  + transaction verification and signatures
+  + malleability
+  + code must always be deterministic
++ Thread safety. If some functionality is not thread-safe, or uses something that is not thread-safe, then clearly indicate the risk on each level.
+
+## Testing
+
+Make sure your code is well tested:
+
++ Provide unit tests for every unit of your code if possible. Unit tests are expected to comprise 70%-80% of your tests.
++ Describe the test scenarios you are implementing for integration tests.
++ Create integration tests for queries and msgs.
++ Use both test cases and property / fuzzy testing. We use the [rapid](https://pkg.go.dev/pgregory.net/rapid) Go library for property-based and fuzzy testing.
++ Do not decrease code test coverage. Explain in a PR if test coverage is decreased.
+
+We expect tests to use `require` or `assert` rather than `t.Skip` or `t.Fail`,
+unless there is a reason to do otherwise.
+When testing a function under a variety of different inputs, we prefer to use
+[table driven tests](https://github.com/golang/go/wiki/TableDrivenTests).
+Table driven test error messages should follow the following format
+`<desc>, tc #<index>, i #<index>`.
+`<desc>` is an optional short description of what's failing, `tc` is the
+index within the test case table that is failing, and `i` is, when there
+is a loop, exactly which iteration of the loop failed.
+The idea is you should be able to see the
+error message and figure out exactly what failed.
+Here is an example check:
+
+```go
+for tcIndex, tc := range cases {
+	resp, err := doSomething()
+	require.NoError(t, err)
+	require.Equal(t, tc.expected, resp, "should correctly perform X, tc #%d", tcIndex)
+}
+```
+
+## Quality Assurance
+
+We are forming a QA team that will support the core Cosmos SDK team and collaborators by:
+
+- Improving the Cosmos SDK QA Processes
+- Improving automation in QA and testing
+- Defining high-quality metrics
+- Maintaining and improving testing frameworks (unit tests, integration tests, and functional tests)
+- Defining test scenarios.
+- Verifying user experience and defining high quality standards.
+  - We want to have **acceptance tests**! Document and list acceptance tests that are implemented and identify acceptance tests that are still missing.
+  - Acceptance tests should be specified in the `acceptance-tests` directory as Markdown files.
+- Supporting other teams with testing frameworks, automation, and User Experience testing.
+- Testing chain upgrades for every new breaking change.
+  - Defining automated tests that assure data integrity after an update.
+
+Desired outcomes:
+
+- QA team works with Development Team.
+- QA is happening in parallel with Core Cosmos SDK development.
+- Releases are more predictable.
+- QA reports. The goal is to guide the team with new tasks and to be one of the QA measures.
+
+As a developer, you must help the QA team by providing instructions for User Experience (UX) and functional testing.

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 788c58d6f68e..d141be660795
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -26,15 +26,30 @@ Contributing to this repo can mean many things such as participating in discussion or proposing code changes. To ensure a smooth workflow for all contributors, the general procedure for contributing has been established:

+<<<<<<< HEAD
1. Either [open](https://github.com/cosmos/cosmos-sdk/issues/new/choose) or [find](https://github.com/cosmos/cosmos-sdk/issues) an issue you'd like to help with
2. Participate in thoughtful discussion on that issue
3. If you would like to contribute:
   1. If the issue is a proposal, ensure that the proposal has been accepted
+=======
+1. Start by browsing [new issues](https://github.com/cosmos/cosmos-sdk/issues) and [discussions](https://github.com/cosmos/cosmos-sdk/discussions). If you are looking for something interesting or if you have something on your mind, there is a chance it has already been discussed.
+
+- Looking for a good place to start contributing?
How about checking out some [good first issues](https://github.com/cosmos/cosmos-sdk/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)?
+
+2. Determine whether a GitHub issue or discussion is more appropriate for your needs:
+   1. If you want to propose something new that requires specification or an additional design, or you would like to change a process, start with a [new discussion](https://github.com/cosmos/cosmos-sdk/discussions/new). With discussions, we can better handle the design process using discussion threads. A discussion usually leads to one or more issues.
+   2. If the issue you want addressed is a specific proposal or a bug, then open a [new issue](https://github.com/cosmos/cosmos-sdk/issues/new/choose).
+   3. Review existing [issues](https://github.com/cosmos/cosmos-sdk/issues) to find an issue you'd like to help with.
+3. Participate in thoughtful discussion on that issue.
+4. If you would like to contribute:
+   1. Ensure that the proposal has been accepted.
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
   2. Ensure that nobody else has already begun working on this issue. If they have, make sure to contact them to collaborate
   3. If nobody has been assigned for the issue and you would like to work on it, make a comment on the issue to inform the community of your intentions
+<<<<<<< HEAD
   to begin work
4. Follow standard GitHub best practices: fork the repo, branch from the HEAD of `master`, make some commits, and submit a PR to `master`
@@ -49,10 +64,17 @@ contributors, the general procedure for contributing has been established:
   of `CHANGELOG.md` (see file for log format)

Note that for very small or blatantly obvious problems (such as typos) it is
+=======
+   to begin work.
+5. To submit your work as a contribution to the repository, follow standard GitHub best practices. See the [pull request guideline](#pull-requests) below.
+
+**Note:** For very small or blatantly obvious problems such as typos, you are
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
not required to open an issue to submit a PR, but be aware that for more complex
problems/features, if a PR is opened before an adequate design discussion has
taken place in a GitHub issue, that PR runs a high likelihood of being rejected.
+<<<<<<< HEAD

Other notes:

- Looking for a good place to start contributing? How about checking out some
@@ -60,17 +82,70 @@ Other notes:
- Please make sure to run `make format` before every commit - the easiest way
  to do this is have your editor run it for you upon saving a file. Additionally
  please ensure that your code is lint compliant by running `make lint-fix`.
+=======
+## Teams Dev Calls
+
+The Cosmos SDK has many stakeholders contributing and shaping the project. Regen Network Development leads the Cosmos SDK R&D, and welcomes long-term contributors and additional maintainers from other projects. We use self-organizing principles to coordinate and collaborate across organizations in structured "Working Groups" that focus on specific problem domains or architectural components of the Cosmos SDK.
+
+The developers are organized in working groups which are listed on a ["Working Groups & Arch Process" Github Issue](https://github.com/cosmos/cosmos-sdk/issues/9058) (pinned at the top of the [issues list](https://github.com/cosmos/cosmos-sdk/issues)).
+
+The important development announcements are shared on [Discord](https://discord.com/invite/cosmosnetwork) in the \#dev-announcements channel.
+
+To synchronize, we have a few major meetings:
+
++ Architecture calls: bi-weekly on Fridays at 14:00 UTC (alternating with the grooming meeting below).
++ Grooming / Planning: bi-weekly on Fridays at 14:00 UTC (alternating with the architecture meeting above).
++ Cosmos Community SDK Development Call on the last Wednesday of every month at 17:00 UTC.
++ Cosmos Roadmap Prioritization every 4 weeks on Tuesday at 15:00 UTC (limited participation).
+
+If you would like to join one of those calls, then please contact us on [Discord](https://discord.com/invite/cosmosnetwork) or reach out directly to Cory Levinson from Regen Network (cory@regen.network).
+
+## Architecture Decision Records (ADR)
+
+When proposing an architecture decision for the Cosmos SDK, please start by opening an [issue](https://github.com/cosmos/cosmos-sdk/issues/new/choose) or a [discussion](https://github.com/cosmos/cosmos-sdk/discussions/new) with a summary of the proposal. Once the proposal has been discussed and there is rough alignment on a high-level approach to the design, the [ADR creation process](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/PROCESS.md) can begin. We are following this process to ensure all involved parties are in agreement before any party begins coding the proposed implementation. If you would like to see examples of how these are written, please refer to the current [ADRs](https://github.com/cosmos/cosmos-sdk/tree/master/docs/architecture).
+
+## Development Procedure
+
+- The latest state of development is on `master`.
+- `master` must never fail `make lint test test-race`.
+- No `--force` onto `master` (except when reverting a broken commit, which should seldom happen).
+- Create a branch to start work:
+  - Fork the repo (core developers must create a branch directly in the Cosmos SDK repo),
+    branch from the HEAD of `master`, make some commits, and submit a PR to `master`.
+  - For core developers working within the `cosmos-sdk` repo, follow branch name conventions to ensure a clear
+    ownership of branches: `{moniker}/{issue#}-branch-name`.
+  - See [Branching Model](#branching-model-and-release) for more details.
+- Be sure to run `make format` before every commit. The easiest way
+  to do this is have your editor run it for you upon saving a file (most editors
+  will do this automatically when configured for the language mode).
+  Additionally, be sure that your code is lint compliant by running `make lint-fix`.
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

A convenience git `pre-commit` hook that runs the formatters automatically before each commit is available in the `contrib/githooks/` directory.

## Architecture Decision Records (ADR)

+<<<<<<< HEAD
When proposing an architecture decision for the SDK, please create an [ADR](./docs/architecture/README.md) so further discussions can be made. We are following this process so all involved parties are in agreement before any party begins coding the proposed implementation. If you would like to see some examples of how these are written refer to the current [ADRs](https://github.com/cosmos/cosmos-sdk/tree/master/docs/architecture).

## Pull Requests
+=======
+Before submitting a pull request:
+
+- merge the latest master `git merge origin/master`,
+- run `make lint test` to ensure that all checks and tests pass.
+
+Then:
+
+1. If you have something to show, **start with a `Draft` PR**. It's good to have early validation of your work and we highly recommend this practice.
A Draft PR also indicates to the community that the work is in progress.
+   Draft PRs also help the core team provide early feedback and ensure the work is headed in the right direction.
+2. When the code is complete, change your PR from `Draft` to `Ready for Review`.
+3. Go through the actions for each checkbox present in the PR template description. The PR actions are automatically provided for each new PR.
+4. Be sure to include a relevant changelog entry in the `Unreleased` section of `CHANGELOG.md` (see file for log format).
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

PRs should be categorically broken up based on the type of changes being made (i.e. `fix`, `feat`, `refactor`, `docs`, etc.). The *type* must be included in the PR title as a prefix (e.g.

diff --git a/STABLE_RELEASES.md b/STABLE_RELEASES.md
index 0e1509dbd18c..55fd004415e3
--- a/STABLE_RELEASES.md
+++ b/STABLE_RELEASES.md
@@ -2,13 +2,85 @@

*Stable Release Series* continue to receive bug fixes until they reach **End Of Life**.

+<<<<<<< HEAD:STABLE_RELEASES.md
Only the following release series are currently supported and receive bug fixes:
+=======
+## Major Release Procedure
+
+A _major release_ is an increment of the first number (eg: `v1.2` → `v2.0.0`) or the _point number_ (eg: `v1.1 → v1.2.0`, also called a _point release_). Each major release opens a _stable release series_ and receives updates outlined in the [Major Release Maintenance](#major-release-maintenance) section.
+
+Before making a new _major_ release we do beta and release candidate releases. For example, for release 1.0.0:
+
+```
+v1.0.0-beta1 → v1.0.0-beta2 → ... → v1.0.0-rc1 → v1.0.0-rc2 → ... → v1.0.0
+```
+
+- Release a first beta version on the `master` branch and freeze `master` from receiving any new features. After beta is released, we focus on releasing the release candidate:
+  - finish audits and reviews
+  - kick off a large round of simulation testing (e.g. 400 seeds for 2k blocks)
+  - perform functional tests
+  - add more tests
+  - release new beta versions as the bugs are discovered and fixed.
+- After the team feels that `master` works fine, we create a `release/vY` branch (going forward known as the release branch), where `Y` is the version number, with the patch part substituted by `x` (eg: 0.42.x, 1.0.x). Ensure the release branch is protected so that pushes against the release branch are permitted only by the release manager or release coordinator.
+  - **PRs targeting this branch can be merged _only_ when exceptional circumstances arise**
+  - update the GitHub mergify integration by adding instructions for automatically backporting commits from `master` to the `release/vY` branch using the `backport/Y` label.
+- In the release branch, prepare a new version section in the `CHANGELOG.md`
+  - All links must be link-ified: `$ python ./scripts/linkify_changelog.py CHANGELOG.md`
+  - Copy the entries into a `RELEASE_CHANGELOG.md`; this is needed so the bot knows which entries to add to the release page on GitHub.
+- Create a new annotated git tag for a release candidate (eg: `git tag -a v1.1.0-rc1`) in the release branch.
+  - from this point we unfreeze master.
+  - the SDK teams collaborate and do their best to run testnets in order to validate the release.
+  - when bugs are found, create a PR for `master`, and backport fixes to the release branch.
+  - create new release candidate tags after bugs are fixed.
+- After the team feels the release branch is stable and everything works, create a full release:
+  - update `CHANGELOG.md`.
+  - create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch.
+  - Create a GitHub release.
+
+Following the _semver_ philosophy, point releases after `v1.0`:
+
+- must not break API
+- can break consensus
+
+Before `v1.0`, point releases can break both API and consensus.
+
+## Patch Release Procedure
+
+A _patch release_ is an increment of the patch number (eg: `v1.2.0` → `v1.2.1`).
+
+**Patch releases must not break API nor consensus.**
+
+Updates to the release branch should come from `master` by backporting PRs (usually done by an automatic cherry-pick followed by a PR to the release branch). The backports must be marked using the `backport/Y` label in the PR for `master`.
+It is the PR author's responsibility to fix merge conflicts, update changelog entries, and
+ensure CI passes. If a PR originates from an external contributor, a core team member assumes
+responsibility to perform this process instead of the original author.
+Lastly, it is the core team's responsibility to ensure that the PR meets all the SRU criteria.
+
+Point releases must follow the [Stable Release Policy](#stable-release-policy).
+
+After the release branch has all commits required for the next patch release:
+
+- update `CHANGELOG.md`.
+- create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch.
+- Create a GitHub release.
+
+## Major Release Maintenance
+
+Major Release series continue to receive bug fixes (released as a Patch Release) until they reach **End Of Life**.
+Each Major Release series is maintained in compliance with the **Stable Release Policy** as described in this document.
+Note: not every Major Release is denoted as a stable release.
+
+Only the following major release series have a stable release status:
+>>>>>>> 479485f95 (style: lint go and markdown (#10060)):RELEASE_PROCESS.md

* **0.42 «Stargate»** will be supported until 6 months after **0.43.0** is published. A fairly strict **bugfix-only** rule applies to pull requests that are requested to be included into a stable point-release.
* **0.43 «Stargate»** is the latest stable release.
+<<<<<<< HEAD:STABLE_RELEASES.md

The **0.43 «Stargate»** release series is maintained in compliance with the **Stable Release Policy** as described in this document.
+=======
+>>>>>>> 479485f95 (style: lint go and markdown (#10060)):RELEASE_PROCESS.md

## Stable Release Policy

This policy presently applies *only* to the following release series:

diff --git a/contrib/rosetta/README.md b/contrib/rosetta/README.md
index f131c843b8ee..a05446ea94f1
--- a/contrib/rosetta/README.md
+++ b/contrib/rosetta/README.md
@@ -17,7 +17,15 @@ Contains the required files to set up rosetta cli and make it work against its w

## node

+<<<<<<< HEAD
Contains the files for a deterministic network, with fixed keys and some actions on there, to test parsing of msgs and historical balances.
+=======
+Contains the files for a deterministic network, with fixed keys and some actions on there, to test parsing of msgs and historical balances. This image is used to run a simapp node and to run the rosetta server.
+
+## Rosetta-cli
+
+The docker image for `./rosetta-cli/Dockerfile` is on [Docker Hub](https://hub.docker.com/r/tendermintdev/rosetta-cli). Whenever rosetta-cli releases a new version, `rosetta-cli/Dockerfile` should be updated to reflect the new version and pushed to Docker Hub.
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

## Notes

diff --git a/cosmovisor/README.md b/cosmovisor/README.md
index e263966a49a0..07c63d73fb1c
--- a/cosmovisor/README.md
+++ b/cosmovisor/README.md
@@ -1,12 +1,57 @@
# Cosmovisor Quick Start

+<<<<<<< HEAD
`cosmovisor` is a small process manager for Cosmos SDK application binaries that monitors the governance module via stdout for incoming chain upgrade proposals. If it sees a proposal that gets approved, `cosmovisor` can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary.
+=======
+`cosmovisor` is a small process manager for Cosmos SDK application binaries that monitors the governance module for incoming chain upgrade proposals. If it sees a proposal that gets approved, `cosmovisor` can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary.
+
+#### Design
+
+Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:
+
+* it will pass arguments to the associated app (configured by the `DAEMON_NAME` env variable).
+  Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`;
+* it will manage an app by restarting and upgrading if needed;
+* it is configured using environment variables, not positional arguments.
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*

## Installation

+<<<<<<< HEAD
To install `cosmovisor`, run the following command:
+=======
+## Setup
+
+### Installation
+
+To install the latest version of `cosmovisor`, run the following command:
+
+```
+go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@latest
+```
+
+To install a previous version, you can specify the version. IMPORTANT: Chains that use Cosmos-SDK v0.42.x and want to use the auto-download feature MUST use Cosmovisor v0.1.0
+
+```
+go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@v0.1.0
+```
+
+It is possible to confirm the version of cosmovisor when using Cosmovisor v1.0.0, but it is not possible to do so with `v0.1.0`.
+
+You can also install from source by pulling the cosmos-sdk repository, switching to the correct version, and building as follows:
+
+```
+git clone git@github.com:cosmos/cosmos-sdk
+cd cosmos-sdk
+git checkout cosmovisor/vx.x.x
+cd cosmovisor
+make
+```
+
+This will build cosmovisor in your current directory. Afterwards, you may want to put it into your machine's PATH, as follows:
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

```
go get github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor
@@ -14,7 +59,21 @@ go get github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor
```

## Command Line Arguments And Environment Variables

+<<<<<<< HEAD
All arguments passed to `cosmovisor` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor` cannot accept any command-line arguments other than those available to the application binary, nor will it print anything to output other than what is printed by the application binary.
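The pass-through behavior described here amounts to spawning the configured binary as a subprocess and handing it our arguments and standard streams. Below is a minimal, illustrative sketch of that mechanism; this is not `cosmovisor`'s actual code, and the binary path construction is an assumption based on the directory layout described in this README:

```go
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// locate the current binary using the documented cosmovisor layout
	bin := filepath.Join(os.Getenv("DAEMON_HOME"), "cosmovisor", "current", "bin", os.Getenv("DAEMON_NAME"))

	// forward all CLI arguments to the subprocess untouched
	cmd := exec.Command(bin, os.Args[1:]...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout // the subprocess's stdout/stderr become our own
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```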
+=======
+### Command Line Arguments And Environment Variables
+
+The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:
+
+* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
+* `run` - Run the configured binary using the rest of the provided arguments.
+* `version`, or `--version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
+
+All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.
+
+*Note: Use of `cosmovisor` without one of the action arguments is deprecated. For backwards compatibility, if the first argument is not an action argument, `run` is assumed. However, this fallback might be removed in future versions, so it is recommended that you always provide `run`.*
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

`cosmovisor` reads its configuration from environment variables:

@@ -66,7 +125,24 @@ In order to support downloadable binaries, a tarball for each upgrade binary will

The `DAEMON` specific code and operations (e.g. tendermint config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected.

+<<<<<<< HEAD
## Auto-Download
+=======
+### Detecting Upgrades
+
+`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
+The following heuristic is applied to detect the upgrade:
+
++ When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary which is `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
++ If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade.
++ If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` tries immediately to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
++ Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger an upgrade mechanism.
+
+When the upgrade mechanism is triggered, `cosmovisor` will:
+
+1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute);
+2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens.
However, for people who don't need such control and want an easier setup (maybe they are syncing a non-validating fullnode and want to do little maintenance), there is another option.

diff --git a/crypto/keyring/keyring.go b/crypto/keyring/keyring.go
index f96c8635243c..ab68f92fe612 100644
--- a/crypto/keyring/keyring.go
+++ b/crypto/keyring/keyring.go
@@ -475,6 +475,10 @@ func (ks keystore) List() ([]Info, error) {
 		return nil, err
 	}

+<<<<<<< HEAD
+=======
+	var res []*Record //nolint:prealloc
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
 	sort.Strings(keys)

 	for _, key := range keys {

diff --git a/crypto/keys/multisig/amino.go b/crypto/keys/multisig/amino.go
index 4849a23173d2..3492f0daa66c 100644
--- a/crypto/keys/multisig/amino.go
+++ b/crypto/keys/multisig/amino.go
@@ -64,7 +64,7 @@ func tmToProto(tmPk tmMultisig) (*LegacyAminoPubKey, error) {
 }

 // MarshalAminoJSON overrides amino JSON unmarshaling.
-func (m LegacyAminoPubKey) MarshalAminoJSON() (tmMultisig, error) { //nolint:golint
+func (m LegacyAminoPubKey) MarshalAminoJSON() (tmMultisig, error) { //nolint:revive
 	return protoToTm(&m)
 }

diff --git a/crypto/keys/secp256k1/secp256k1_nocgo.go b/crypto/keys/secp256k1/secp256k1_nocgo.go
index 2d605447f421..26735b44229b 100644
--- a/crypto/keys/secp256k1/secp256k1_nocgo.go
+++ b/crypto/keys/secp256k1/secp256k1_nocgo.go
@@ -1,3 +1,4 @@
+//go:build !libsecp256k1
 // +build !libsecp256k1

 package secp256k1

diff --git a/crypto/ledger/ledger_notavail.go b/crypto/ledger/ledger_notavail.go
index 66d16adcc023..578c33d4369c 100644
--- a/crypto/ledger/ledger_notavail.go
+++ b/crypto/ledger/ledger_notavail.go
@@ -1,4 +1,6 @@
+//go:build !cgo || !ledger
 // +build !cgo !ledger
+
 // test_ledger_mock

 package ledger

diff --git a/db/README.md b/db/README.md
new file mode 100644
index 000000000000..01471f144c61
--- /dev/null
+++ b/db/README.md
@@ -0,0 +1,72 @@
+# Key-Value Database
+
+Databases supporting mappings of arbitrary byte sequences.
+
+## Interfaces
+
+The database interface types consist of objects to encapsulate the singular connection to the DB, transactions being made to it, historical version state, and iteration.
+
+### `DBConnection`
+
+This interface represents a connection to a versioned key-value database. All versioning operations are performed using methods on this type.
+
+* The `Versions` method returns a `VersionSet` which represents an immutable view of the version history at the current state.
+* Version history is modified via the `{Save,Delete}Version` methods.
+* Operations on version history do not modify any database contents.
+
+### `DBReader`, `DBWriter`, and `DBReadWriter`
+
+These types represent transactions on the database contents. Their methods provide CRUD operations as well as iteration.
+
+* Writable transactions must call `Commit` to flush their operations to the source DB.
+* All open transactions must be closed with `Discard` or `Commit` before a new version can be saved on the source DB.
+* The maximum number of safely concurrent transactions is dependent on the backend implementation.
+* A single transaction object is not safe for concurrent use.
+* Write conflicts on concurrent transactions will cause an error at commit time (optimistic concurrency control).
+
+#### `Iterator`
+
+* An iterator is invalidated by any writes within its `Domain` to the source transaction while it is open.
+* An iterator must call `Close` before its source transaction is closed.
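Taken together, these rules imply a simple transaction lifecycle. The following is an illustrative sketch only: `ReadWriter`, `Set`, `Commit`, and `Discard` appear in the BadgerDB examples below, but the exact `SaveVersion` signature is an assumption, not the documented API:

```go
// saveOne writes a single pair in its own transaction and then persists a
// new version. db is assumed to satisfy the DBConnection interface above.
func saveOne(db DBConnection) error {
	txn := db.ReadWriter()
	if err := txn.Set([]byte("key"), []byte("value")); err != nil {
		txn.Discard() // abandon the failed transaction
		return err
	}
	if err := txn.Commit(); err != nil { // flush operations to the source DB
		return err
	}
	// all transactions are now closed, so a new version can be saved
	_, err := db.SaveVersion() // assumed to return the new version id
	return err
}
```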
+
+### `VersionSet`
+
+This represents a self-contained and immutable view of a database's version history state. It is therefore safe to retain and concurrently access any instance of this object.
+
+## Implementations
+
+### In-memory DB
+
+The in-memory DB in the `db/memdb` package cannot be persisted to disk. It is implemented using the Google [btree](https://pkg.go.dev/github.com/google/btree) library.
+
+* This currently does not perform write conflict detection, so it only supports a single open write-transaction at a time. Multiple and concurrent read-transactions are supported.
+
+### BadgerDB
+
+A [BadgerDB](https://pkg.go.dev/github.com/dgraph-io/badger/v3)-based backend. Internally, this uses BadgerDB's ["managed" mode](https://pkg.go.dev/github.com/dgraph-io/badger/v3#OpenManaged) for version management.
+Note that Badger only recognizes write conflicts for rows that are read _after_ a conflicting transaction was opened. In other words, the following will raise an error:
+
+```go
+tx1, tx2 := db.Writer(), db.ReadWriter()
+key := []byte("key")
+tx2.Get(key)
+tx1.Set(key, []byte("a"))
+tx2.Set(key, []byte("b"))
+tx1.Commit() // ok
+err := tx2.Commit() // err is non-nil
+```
+
+But this will not:
+
+```go
+tx1, tx2 := db.Writer(), db.ReadWriter()
+key := []byte("key")
+tx1.Set(key, []byte("a"))
+tx2.Set(key, []byte("b"))
+tx1.Commit() // ok
+tx2.Commit() // ok
+```
+
+### RocksDB
+
+A [RocksDB](https://github.com/facebook/rocksdb)-based backend. Internally this uses [`OptimisticTransactionDB`](https://github.com/facebook/rocksdb/wiki/Transactions#optimistictransactiondb) to allow concurrent transactions with write conflict detection. Historical versioning is internally implemented with [Checkpoints](https://github.com/facebook/rocksdb/wiki/Checkpoints).

diff --git a/docs/404.md b/docs/404.md
new file mode 100644
index 000000000000..d7b8b16782fa
--- /dev/null
+++ b/docs/404.md
@@ -0,0 +1,47 @@
+
+
+# 404 - Lost in space, this is just an empty void

diff --git a/docs/DOCS_README.md b/docs/DOCS_README.md
index 2e080fc53dbc..a2e8da3ce89b 100644
--- a/docs/DOCS_README.md
+++ b/docs/DOCS_README.md
@@ -1,5 +1,6 @@
# Updating the docs

+<<<<<<< HEAD
If you want to open a PR on the Cosmos SDK to update the documentation, please follow the guidelines in the [`CONTRIBUTING.md`](https://github.com/cosmos/cosmos-sdk/tree/master/CONTRIBUTING.md#updating-documentation)

## Translating

@@ -8,6 +9,24 @@ If you want to open a PR on the Cosmos SDK to update the documentation, please f
- Always translate content living on `master`.
- Only content under `/docs/intro/`, `/docs/basics/`, `/docs/core/`, `/docs/building-modules/` and `docs/run-node/` needs to be translated, as well as `docs/README.md`. It is also nice (but not mandatory) to translate `/docs/spec/`.
- Specify the release/tag of the translation in the README of your translation folder. Update the release/tag each time you update the translation.
+=======
+If you want to open a PR in Cosmos SDK to update the documentation, please follow the guidelines in [`CONTRIBUTING.md`](https://github.com/cosmos/cosmos-sdk/tree/master/CONTRIBUTING.md#updating-documentation).
+
+## Internationalization
+
+- Translations for documentation live in a `docs/<locale>/` folder, where `<locale>` is the language code for a specific language. For example, `zh` for Chinese, `ko` for Korean, `ru` for Russian, etc.
+- Each `docs/<locale>/` folder must follow the same folder structure within `docs/`, but only content in the following folders needs to be translated and included in the respective `docs/<locale>/` folder:
+  - `docs/basics/`
+  - `docs/building-modules/`
+  - `docs/core/`
+  - `docs/ibc/`
+  - `docs/intro/`
+  - `docs/migrations/`
+  - `docs/run-node/`
+- Each `docs/<locale>/` folder must also have a `README.md` that includes a translated version of both the layout and content within the root-level [`README.md`](https://github.com/cosmos/cosmos-sdk/tree/master/docs/README.md). The layout defined in the `README.md` is used to build the homepage.
+- Always translate content living on `master` unless you are revising documentation for a specific release. Translated documentation, like the root-level documentation, is semantically versioned.
+- For additional configuration options, please see [VuePress Internationalization](https://vuepress.vuejs.org/guide/i18n.html).
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

## Docs Build Workflow

diff --git a/docs/architecture/adr-010-modular-antehandler.md b/docs/architecture/adr-010-modular-antehandler.md
index eb5e8145de33..15d28dbeebff 100644
--- a/docs/architecture/adr-010-modular-antehandler.md
+++ b/docs/architecture/adr-010-modular-antehandler.md
@@ -273,10 +273,6 @@ Cons:
 1. Decorator pattern may have a deeply nested structure that is hard to understand, this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.
 2. Does not make use of the ModuleManager design. Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.

-## Status
-
-> Accepted Simple Decorators approach
-
 ## Consequences

 Since pros and cons are written for each approach, it is omitted from this section

diff --git a/docs/architecture/adr-022-custom-panic-handling.md b/docs/architecture/adr-022-custom-panic-handling.md
index 228adeef2877..034f2e7344b9 100644
--- a/docs/architecture/adr-022-custom-panic-handling.md
+++ b/docs/architecture/adr-022-custom-panic-handling.md
@@ -187,10 +187,6 @@ func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {

 This method would prepend handlers to an existing chain.

-## Status
-
-Proposed
-
 ## Consequences

 ### Positive

diff --git a/docs/architecture/adr-038-state-listening.md b/docs/architecture/adr-038-state-listening.md
index 9bc644dddb26..0d32eac126f3 100644
--- a/docs/architecture/adr-038-state-listening.md
+++ b/docs/architecture/adr-038-state-listening.md
@@ -207,18 +207,40 @@ func (rs *Store) CacheMultiStore() types.CacheMultiStore {

We will introduce a new `StreamingService` interface for exposing `WriteListener` data streams to external consumers.
```go
+<<<<<<< HEAD
// Hook interface used to hook into the ABCI message processing of the BaseApp
type Hook interface {
	ListenBeginBlock(ctx sdk.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) // update the streaming service with the latest BeginBlock messages
	ListenEndBlock(ctx sdk.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) // update the streaming service with the latest EndBlock messages
	ListenDeliverTx(ctx sdk.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) // update the streaming service with the latest DeliverTx messages
+=======
+// ABCIListener interface used to hook into the ABCI message processing of the BaseApp
+type ABCIListener interface {
+	// ListenBeginBlock updates the streaming service with the latest BeginBlock messages
+	ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error
+	// ListenEndBlock updates the streaming service with the latest EndBlock messages
+	ListenEndBlock(ctx types.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error
+	// ListenDeliverTx updates the streaming service with the latest DeliverTx messages
+	ListenDeliverTx(ctx types.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
}

// StreamingService interface for registering WriteListeners with the BaseApp and updating the service with the ABCI messages using the hooks
type StreamingService interface {
+<<<<<<< HEAD
	Stream(wg *sync.WaitGroup, quitChan <-chan struct{}) // streaming service loop, awaits kv pairs and writes them to some destination stream or file
	Listeners() map[sdk.StoreKey][]storeTypes.WriteListener // returns the streaming service's listeners for the BaseApp to register
	Hook
+=======
+	// Stream is the streaming service loop, awaits kv pairs and writes them to some destination stream or file
+	Stream(wg *sync.WaitGroup) error
+	// Listeners returns the streaming service's listeners for the BaseApp to register
+	Listeners() map[types.StoreKey][]store.WriteListener
+	// ABCIListener interface for hooking into the ABCI messages from inside the BaseApp
+	ABCIListener
+	// Closer interface
+	io.Closer
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
}
```

@@ -563,6 +585,7 @@ func NewSimApp(
 	// configure state listening capabilities using AppOptions
 	listeners := cast.ToStringSlice(appOpts.Get("store.streamers"))
 	for _, listenerName := range listeners {
+<<<<<<< HEAD
 		// get the store keys allowed to be exposed for this streaming service/state listeners
 		exposeKeyStrs := cast.ToStringSlice(appOpts.Get(fmt.Sprintf("streamers.%s.keys", listenerName)))
 		exposeStoreKeys = make([]storeTypes.StoreKey, 0, len(exposeKeyStrs))
@@ -570,6 +593,26 @@ func NewSimApp(
 			if storeKey, ok := keys[keyStr]; ok {
 				exposeStoreKeys = append(exposeStoreKeys, storeKey)
 			}
+=======
+		// get the store keys allowed to be exposed for this streaming service
+		exposeKeyStrs := cast.ToStringSlice(appOpts.Get(fmt.Sprintf("streamers.%s.keys", listenerName)))
+		var exposeStoreKeys []sdk.StoreKey
+		if exposeAll(exposeKeyStrs) { // if list contains `*`, expose all StoreKeys
+			exposeStoreKeys = make([]sdk.StoreKey, 0, len(keys))
+			for _, storeKey := range keys {
+				exposeStoreKeys = append(exposeStoreKeys, storeKey)
+			}
+		} else {
+			exposeStoreKeys = make([]sdk.StoreKey, 0, len(exposeKeyStrs))
+			for _, keyStr := range exposeKeyStrs {
+				if storeKey, ok := keys[keyStr]; ok {
+					exposeStoreKeys = append(exposeStoreKeys, storeKey)
+				}
+			}
+		}
+		if len(exposeStoreKeys) == 0 { // short circuit if we are not exposing anything
+			continue
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
 		}
 		// get the constructor for this listener name
 		constructor, err := baseapp.NewStreamingServiceConstructor(listenerName)

diff --git a/docs/architecture/adr-040-storage-and-smt-state-commitments.md b/docs/architecture/adr-040-storage-and-smt-state-commitments.md
index 115723576091..6b9549b86bd2 100644
--- a/docs/architecture/adr-040-storage-and-smt-state-commitments.md
+++ b/docs/architecture/adr-040-storage-and-smt-state-commitments.md
@@ -110,6 +110,96 @@ We need to be able to process transactions and roll-back state updates if a tran

We identified use cases where modules will need to save an object commitment without storing an object itself. Sometimes clients receive complex objects, and they have no way to prove the correctness of that object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly.

+<<<<<<< HEAD
+=======
+### Refactor MultiStore
+
+The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module uses its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity. Instead it causes problems related to race conditions and atomic DB commits (see: [\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)).
+
+We propose to reduce the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity.
+
+```go
+// Used where read-only access to versions is needed.
+type BasicRootStore interface {
+	Store
+	GetKVStore(StoreKey) KVStore
+	CacheRootStore() CacheRootStore
+}
+
+// Used as the main app state, replacing CommitMultiStore.
+type CommitRootStore interface {
+	BasicRootStore
+	Committer
+	Snapshotter
+
+	GetVersion(uint64) (BasicRootStore, error)
+	SetInitialVersion(uint64) error
+
+	... // Trace and Listen methods
+}
+
+// Replaces CacheMultiStore for branched state.
+type CacheRootStore interface {
+	BasicRootStore
+	Write()
+
+	... // Trace and Listen methods
+}
+
+// Example of constructor parameters for the concrete type.
+type RootStoreConfig struct {
+	Upgrades       *StoreUpgrades
+	InitialVersion uint64
+
+	ReservePrefix(StoreKey, StoreType)
+}
+```
+
+
+
+In contrast to `MultiStore`, `RootStore` doesn't allow dynamically mounting sub-stores or providing an arbitrary backing DB for individual sub-stores.
+
+NOTE: modules will be able to use a special commitment and their own DBs. For example: a module which will use ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low level interface.
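To make the proposed API concrete, here is a hedged sketch of how an app could branch and commit state through the interfaces above. The `KVStore` `Set` method is the existing SDK API; everything else follows the interface sketch, and the flow itself is illustrative, not prescribed by this ADR:

```go
// writeAndCommit stages a write in a branched store, flushes it back,
// and commits a new version on the underlying RootStore.
func writeAndCommit(rs CommitRootStore, key StoreKey) {
	branch := rs.CacheRootStore()    // branch off the current state
	kv := branch.GetKVStore(key)     // access one module's namespace
	kv.Set([]byte("k"), []byte("v")) // stage a write in the branch
	branch.Write()                   // flush the branched writes back to rs
	rs.Commit()                      // persist a new version (SC and SS updated together)
}
```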
+
+#### Compatibility support
+
+To ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`.
+
+The new `RootStore` and supporting types can be implemented in a `store/v2` package to avoid breaking existing code.
+
+#### Merkle Proofs and IBC
+
+Currently, an IBC (v1.0) Merkle proof path consists of two elements (`["<store-key>", "<record-key>"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained.
+The root hash of the proof for `"<record-key>"` is hashed with the `"<store-key>"` to validate against the App Hash.
+
+This is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure, and won't produce separate proofs for the store- and record-key. Ideally, the store-key component of the proof could just be omitted, and updated to use a "no-op" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `"ibc"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem, which already widely adopts the IBC module. Requesting an update of the IBC module across the chains is a time-consuming effort and not easily feasible.
+
+As a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle Tree to create a final App hash. The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `"ibc"`.
+
+The workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase.
+
+The presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs.
+
+### Optimization: compress module key prefixes
+
+We consider a compression of prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't have a common byte prefix. For Merkle Proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely:
+
++ each module has its own namespace;
++ when accessing a module namespace we create a KVStore with an embedded prefix;
++ that prefix will be compressed only when accessing and managing `SS`.
+
+We need to assure that the codes won't change. We can fix the mapping in a static variable (provided by an app) or in the SS state under a special key.
+
+TODO: need to make a decision about the key compression.
+
+### Optimization: SS key compression
+
+Some objects may be saved with a key which contains a Protobuf message type. Such keys are long. We could save a lot of space by mapping Protobuf message types to varints.
+
+TODO: finalize this or move to another ADR.
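For illustration, the kind of prefix compression discussed above can be built on the standard library's varint encoding. The following is a hedged sketch; the static module-to-integer mapping is an assumption of how an app might provide the fixed codes:

```go
import "encoding/binary"

// moduleIDs is an assumed, app-provided fixed mapping from module key to integer code.
var moduleIDs = map[string]uint64{"bank": 1, "staking": 2}

// compressPrefix replaces a module-name prefix of an SS key with its varint code.
// Unsigned varint encodings are prefix-free, so compressed keys cannot collide.
func compressPrefix(module string, key []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, moduleIDs[module])
	return append(buf[:n], key...)
}
```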
+
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

## Consequences

### Backwards Compatibility

diff --git a/docs/architecture/adr-043-nft-module.md b/docs/architecture/adr-043-nft-module.md
new file mode 100644
index 000000000000..99152f990e61
--- /dev/null
+++ b/docs/architecture/adr-043-nft-module.md
@@ -0,0 +1,340 @@
+# ADR 43: NFT Module
+
+## Changelog
+
+- 05.05.2021: Initial Draft
+- 07.01.2021: Incorporate Billy's feedback
+- 07.02.2021: Incorporate feedback from Aaron, Shaun, Billy et al.
+
+## Status
+
+DRAFT
+
+## Abstract
+
+This ADR defines the `x/nft` module, which is a generic implementation of NFTs, roughly "compatible" with ERC721. **Applications using the `x/nft` module must implement the following functions**:
+
+- `MsgNewClass` - Receive the user's request to create a class, and call the `NewClass` of the `x/nft` module.
+- `MsgUpdateClass` - Receive the user's request to update a class, and call the `UpdateClass` of the `x/nft` module.
+- `MsgMintNFT` - Receive the user's request to mint an NFT, and call the `MintNFT` of the `x/nft` module.
+- `MsgBurnNFT` - Receive the user's request to burn an NFT, and call the `BurnNFT` of the `x/nft` module.
+- `MsgUpdateNFT` - Receive the user's request to update an NFT, and call the `UpdateNFT` of the `x/nft` module.
+
+## Context
+
+NFTs are more than just crypto art, which is very helpful for accruing value to the Cosmos ecosystem. As a result, Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and sending the ownership representative of NFTs, as discussed in https://github.com/cosmos/cosmos-sdk/discussions/9065.
+
+As was discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:
+
+- irismod/nft and modules/incubator/nft
+- CW721
+- DID NFTs
+- interNFT
+
+Since functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.
+
+Considering generic usage and compatibility of interchain protocols including IBC and Gravity Bridge, it is preferred to have a generic NFT module design which handles the generic NFTs logic.
+
+This design idea enables composability: application-specific functions should be managed by other modules on Cosmos Hub or on other Zones by importing the NFT module.
+
+The current design is based on the work done by the [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos repository](https://github.com/cosmos/modules/tree/master/incubator/nft).
+
+## Decision
+
+We will create a module `x/nft`, which contains the following functionality:
+
+- Store NFTs and track their ownership.
+- Expose `Keeper` interface for composing modules to mint and burn NFTs.
+- Expose external `Message` interface for users to transfer ownership of their NFTs.
+- Query NFTs and their supply information.
+
+### Types
+
+#### Class
+
+We define a model for NFT **Class**, which is comparable to an ERC721 Contract on Ethereum, under which a collection of NFTs can be created and managed.
+
+```protobuf
+message Class {
+  string id = 1;
+  string name = 2;
+  string symbol = 3;
+  string description = 4;
+  string uri = 5;
+  string uri_hash = 6;
+}
+```
+
+- `id` is an alphanumeric identifier of the NFT class; it is used as the primary index for storing the class; _required_
+- `name` is a descriptive name of the NFT class; _optional_
+- `symbol` is the symbol usually shown on exchanges for the NFT class; _optional_
+- `description` is a detailed description of the NFT class; _optional_
+- `uri` is a URL pointing to an off-chain JSON file that contains metadata about this NFT class ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); _optional_
+- `uri_hash` is a hash of the `uri`; _optional_
+
+#### NFT
+
+We define a general model for `NFT` as follows.
+
+```protobuf
+message NFT {
+  string class_id = 1;
+  string id = 2;
+  string uri = 3;
+  string uri_hash = 4;
+  google.protobuf.Any data = 10;
+}
+```
+
+- `class_id` is the identifier of the NFT class where the NFT belongs; _required_
+- `id` is an alphanumeric identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_
+
+  ```
+  {class_id}/{id} --> NFT (bytes)
+  ```
+
+- `uri` is a URL pointing to an off-chain JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_
+- `uri_hash` is a hash of the `uri`;
+- `data` is a field that CAN be used by composing modules to specify additional properties for the NFT; _optional_
+
+This ADR doesn't specify values that `data` can take; however, best practices recommend upper-level NFT modules clearly specify their contents. Although the value of this field doesn't provide the additional context required to manage NFT records, which means that the field can technically be removed from the specification, the field's existence allows basic informational/UI functionality.
+
+### `Keeper` Interface
+
+```go
+type Keeper interface {
+  NewClass(class Class)
+  UpdateClass(class Class)
+
+  Mint(nft NFT, receiver sdk.AccAddress) // updates totalSupply
+  Burn(classId string, nftId string) // updates totalSupply
+  Update(nft NFT)
+  Transfer(classId string, nftId string, receiver sdk.AccAddress)
+
+  GetClass(classId string) Class
+  GetClasses() []Class
+
+  GetNFT(classId string, nftId string) NFT
+  GetNFTsOfClassByOwner(classId string, owner sdk.AccAddress) []NFT
+  GetNFTsOfClass(classId string) []NFT
+
+  GetOwner(classId string, nftId string) sdk.AccAddress
+  GetBalance(classId string, owner sdk.AccAddress) uint64
+  GetTotalSupply(classId string) uint64
+}
+```
+
+Other business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper` (see the sketch after the `MsgSend` definition below).
+
+### `Msg` Service
+
+```protobuf
+service Msg {
+  rpc Send(MsgSend) returns (MsgSendResponse);
+}
+
+message MsgSend {
+  string class_id = 1;
+  string id = 2;
+  string sender = 3;
+  string receiver = 4;
+}
+message MsgSendResponse {}
+```
+
+`MsgSend` can be used to transfer the ownership of an NFT to another address.
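As a sketch of how a composing module might drive the `Keeper` above, consider a hypothetical mint handler. The module wiring and method name here are illustrative assumptions, not part of this ADR:

```go
// myMsgServer is a hypothetical composing module's Msg server that
// embeds a reference to the x/nft Keeper defined above.
type myMsgServer struct {
	nft Keeper
}

// MintTo applies the module's own application-specific checks (elided here)
// and then mints through the generic x/nft keeper.
func (s myMsgServer) MintTo(classID, nftID string, receiver sdk.AccAddress) {
	token := NFT{ClassId: classID, Id: nftID}
	s.nft.Mint(token, receiver) // records ownership and updates totalSupply
}
```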
+ +The implementation outline of the server is as follows: + +```go +type msgServer struct{ + k Keeper +} + +func (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { + // check current ownership + assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id)) + + // transfer ownership + m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver) + + return &types.MsgSendResponse{}, nil +} +``` + +The query service methods for the `x/nft` module are: + +```proto +service Query { + + // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721 + rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/balance/{class_id}/{owner}"; + } + + // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721 + rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/owner/{class_id}/{id}"; + } + + // Supply queries the number of NFTs of a given class, same as totalSupply in ERC721Enumerable + rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/supply/{class_id}"; + } + + // NFTsOfClassByOwner queries the NFTs of a given class owned by the owner, similar to tokenOfOwnerByIndex in ERC721Enumerable + rpc NFTsOfClassByOwner(QueryNFTsOfClassByOwnerRequest) returns (QueryNFTsResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/owned_nfts/{class_id}/{owner}"; + } + + // NFTsOfClass queries all NFTs of a given class, similar to tokenByIndex in ERC721Enumerable + rpc NFTsOfClass(QueryNFTsOfClassRequest) returns (QueryNFTsResponse) { + option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}"; + } + + // NFT queries an NFT based on its class and id. 
+  rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) {
+    option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}/{id}";
+  }
+
+  // Class queries an NFT class based on its id
+  rpc Class(QueryClassRequest) returns (QueryClassResponse) {
+    option (google.api.http).get = "/cosmos/nft/v1beta1/classes/{class_id}";
+  }
+
+  // Classes queries all NFT classes
+  rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) {
+    option (google.api.http).get = "/cosmos/nft/v1beta1/classes";
+  }
+}
+
+// QueryBalanceRequest is the request type for the Query/Balance RPC method
+message QueryBalanceRequest {
+  string class_id = 1;
+  string owner = 2;
+}
+
+// QueryBalanceResponse is the response type for the Query/Balance RPC method
+message QueryBalanceResponse {
+  uint64 amount = 1;
+}
+
+// QueryOwnerRequest is the request type for the Query/Owner RPC method
+message QueryOwnerRequest {
+  string class_id = 1;
+  string id = 2;
+}
+
+// QueryOwnerResponse is the response type for the Query/Owner RPC method
+message QueryOwnerResponse {
+  string owner = 1;
+}
+
+// QuerySupplyRequest is the request type for the Query/Supply RPC method
+message QuerySupplyRequest {
+  string class_id = 1;
+}
+
+// QuerySupplyResponse is the response type for the Query/Supply RPC method
+message QuerySupplyResponse {
+  uint64 amount = 1;
+}
+
+// QueryNFTsOfClassByOwnerRequest is the request type for the Query/NFTsOfClassByOwner RPC method
+message QueryNFTsOfClassByOwnerRequest {
+  string class_id = 1;
+  string owner = 2;
+  cosmos.base.query.v1beta1.PageRequest pagination = 3;
+}
+
+// QueryNFTsOfClassRequest is the request type for the Query/NFTsOfClass RPC method
+message QueryNFTsOfClassRequest {
+  string class_id = 1;
+  cosmos.base.query.v1beta1.PageRequest pagination = 2;
+}
+
+// QueryNFTsResponse is the response type for the Query/NFTsOfClass and Query/NFTsOfClassByOwner RPC methods
+message QueryNFTsResponse {
+  repeated cosmos.nft.v1beta1.NFT nfts = 1;
+  cosmos.base.query.v1beta1.PageResponse pagination = 2;
+}
+
+// QueryNFTRequest is the request type for the Query/NFT RPC method
+message QueryNFTRequest {
+  string class_id = 1;
+  string id = 2;
+}
+
+// QueryNFTResponse is the response type for the Query/NFT RPC method
+message QueryNFTResponse {
+  cosmos.nft.v1beta1.NFT nft = 1;
+}
+
+// QueryClassRequest is the request type for the Query/Class RPC method
+message QueryClassRequest {
+  string class_id = 1;
+}
+
+// QueryClassResponse is the response type for the Query/Class RPC method
+message QueryClassResponse {
+  cosmos.nft.v1beta1.Class class = 1;
+}
+
+// QueryClassesRequest is the request type for the Query/Classes RPC method
+message QueryClassesRequest {
+  // pagination defines an optional pagination for the request.
+  cosmos.base.query.v1beta1.PageRequest pagination = 1;
+}
+
+// QueryClassesResponse is the response type for the Query/Classes RPC method
+message QueryClassesResponse {
+  repeated cosmos.nft.v1beta1.Class classes = 1;
+  cosmos.base.query.v1beta1.PageResponse pagination = 2;
+}
+```
+
+### Interoperability
+
+Interoperability is all about reusing assets between modules and chains. The former is achieved by ADR-33: Protobuf client - server communication, which is not yet finalized at the time of writing. The latter is achieved by IBC, and here we will focus on the IBC side.
+IBC is implemented per module. Here, we agreed that NFTs will be recorded and managed in `x/nft`. This requires the creation of a new IBC standard and an implementation of it.
+
+For IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance keeping functionality to x/nft, or else re-implement all functionality using the NFT object type understood by the IBC client. In other words: x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for bookkeeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book. Not using x/nft will require implementing another module for IBC.
+
+## Consequences
+
+### Backward Compatibility
+
+No backward incompatibilities.
+
+### Forward Compatibility
+
+This specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId), and we conform to this implicitly because a single module currently tracks NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe.
+
+### Positive
+
+- NFT identifiers available on Cosmos Hub.
+- Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721.
+- NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge.
+
+### Negative
+
+- A new IBC app is required for x/nft.
+
+### Neutral
+
+- Other functions will need additional modules. For example, a custody module is needed to support NFT trading functionality, and a collectible module is needed for defining NFT properties.
+
+## Further Discussions
+
+For other kinds of applications on the Hub, more app-specific modules can be developed in the future:
+
+- `x/nft/custody`: custody of NFTs to support trading functionality.
+- `x/nft/marketplace`: selling and buying NFTs using sdk.Coins.
+
+Other networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.
+
+## References
+
+- Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/9065
+- x/nft: initialize module: https://github.com/cosmos/cosmos-sdk/pull/9174
+- [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/adr-033-protobuf-inter-module-comm.md)
diff --git a/docs/architecture/adr-044-protobuf-updates-guidelines.md b/docs/architecture/adr-044-protobuf-updates-guidelines.md
new file mode 100644
index 000000000000..a76a7579ba07
--- /dev/null
+++ b/docs/architecture/adr-044-protobuf-updates-guidelines.md
@@ -0,0 +1,109 @@
+# ADR 044: Guidelines for Updating Protobuf Definitions
+
+## Changelog
+
+- 28.06.2021: Initial Draft
+
+## Status
+
+Draft
+
+## Abstract
+
+This ADR provides guidelines and recommended practices for updating Protobuf definitions. These guidelines target module developers.
+
+## Context
+
+The Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/master/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version, so as not to break tooling (including indexers and explorers), wallets, and other third-party integrations.
+
+When making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations.
We noticed however that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example:
+
+- Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node.
+- Marking fields as `reserved`. Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version. However, by doing so, client backwards compatibility is broken, as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.
+
+Moreover, module developers often face other questions around Protobuf definitions, such as "Can I rename a field?" or "Can I deprecate a field?" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions.
+
+## Decision
+
+We decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions:
+
+- `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs.
+- `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments.
+- `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix.
+- `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix.
+- `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility.
+
+On top of Buf's recommendations, we add the following guidelines that are specific to the Cosmos SDK.
+
+### Updating Protobuf Definition Without Bumping Version
+
+#### 1. `Msg`s MUST NOT have new fields
+
+When processing `Msg`s, the Cosmos SDK's antehandlers are strict and don't allow unknown fields in `Msg`s. This is checked by the unknown field rejection in the [`codec/unknownproto` package](https://github.com/cosmos/cosmos-sdk/blob/master/codec/unknownproto).
+
+Now imagine a v0.43 node accepting a `MsgExample` transaction, and in v0.44 the chain developer decides to add a field to `MsgExample`. A client developer, who only manipulates Protobuf definitions, would see that `MsgExample` has a new field, and will populate it. However, sending the new `MsgExample` to an old v0.43 node would cause the v0.43 node to reject the `MsgExample` because of the unknown field. The expectation that the same Protobuf version can be used across multiple node versions MUST be guaranteed.
+
+For this reason, module developers MUST NOT add new fields to existing `Msg`s.
+
+It is worth mentioning that this restriction applies not only to fields directly inside a `Msg`, but also to fields in all nested structs and `Any`s inside a `Msg`.
+
+#### 2. Non-`Msg`-related Protobuf definitions MAY have new fields
+
+On the other hand, module developers MAY add new fields to Protobuf definitions related to the `Query` service or to objects which are saved in the store. This recommendation follows the Protobuf specification, but is added in this document for clarity.
+
+#### 3. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields
+
+Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields.
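+
+For instance, a field could be deprecated as follows; the message and field names here are purely illustrative, not an existing SDK definition:
+
+```protobuf
+message MsgExample {
+  string from_address = 1;
+  // Deprecated: this field is kept (rather than removed or marked reserved)
+  // so that the package version does not need to be bumped.
+  string memo = 2 [deprecated = true];
+}
+```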
If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).
+
+As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically:
+
+- The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty.
+- The Cosmos SDK now supports [governance split votes](./adr-037-gov-split-vote.md). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for 1 vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if `len(options) == 1` and `options[0].Weight == 1.0`.
+
+#### 4. Fields MUST NOT be renamed
+
+Whereas the official Protobuf recommendations do not prohibit renaming fields, as renaming does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields will lead to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI.
+
+### Incrementing Protobuf Package Version
+
+TODO, needs architecture review. Some topics:
+
+- Bumping versions frequency
+- When bumping versions, should the Cosmos SDK support both versions?
+  - i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?
+- mention ADR-023 Protobuf naming
+
+## Consequences
+
+> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
+
+### Backwards Compatibility
+
+> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
+
+### Positive
+
+- less pain to tool developers
+- more compatibility in the ecosystem
+- ...
+
+### Negative
+
+{negative consequences}
+
+### Neutral
+
+- more rigor in Protobuf review
+
+## Further Discussions
+
+This ADR is still in the DRAFT stage, and the "Incrementing Protobuf Package Version" section will be filled in once we make a decision on how to correctly do it.
+
+## Test Cases [optional]
+
+Test cases for an implementation are mandatory for ADRs that affect consensus changes. Other ADRs can choose to include links to test cases, if applicable.
+
+## References
+
+- [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1
+- [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes
diff --git a/docs/architecture/adr-046-module-params.md b/docs/architecture/adr-046-module-params.md
new file mode 100644
index 000000000000..520c79884e82
--- /dev/null
+++ b/docs/architecture/adr-046-module-params.md
@@ -0,0 +1,184 @@
+# ADR 046: Module Params
+
+## Changelog
+
+- Sep 22, 2021: Initial Draft
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR describes an alternative approach to how Cosmos SDK modules use, interact
+with, and store their respective parameters.
+
+## Context
+
+Currently, in the Cosmos SDK, modules that require the use of parameters use the
+`x/params` module. The `x/params` module works by having modules define parameters,
+typically via a simple `Params` structure, and registering that structure in
+the `x/params` module via a unique `Subspace` that belongs to the respective
+registering module. The registering module then has unique access to its respective
+`Subspace`. Through this `Subspace`, the module can get and set its `Params`
+structure.
+
+In addition, the Cosmos SDK's `x/gov` module has direct support for changing
+parameters on-chain via a `ParamChangeProposal` governance proposal type, where
+stakeholders can vote on suggested parameter changes.
+
+There are various tradeoffs to using the `x/params` module to manage individual
+module parameters. Namely, managing parameters essentially comes for "free" in
+that developers only need to define the `Params` struct, the `Subspace`, and the
+various auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However,
+there are some notable drawbacks. These drawbacks include the fact that parameters
+are serialized in state via JSON, which is extremely slow. In addition, parameter
+changes via `ParamChangeProposal` governance proposals have no way of reading from
+or writing to state. In other words, it is currently not possible to have any
+state transitions in the application during an attempt to change param(s).
+
+## Decision
+
+We will build off of the alignment of `x/gov` and `x/authz` work per
+[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers
+will create one or more unique parameter data structures that must be serialized
+to state. The Param data structures must implement the `sdk.Msg` interface, with a
+respective Protobuf Msg service method which will validate and update the parameters
+with all necessary changes. The `x/gov` module, via the work done in
+[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param
+messages, which will be handled by Protobuf Msg services.
+
+Note, it is up to developers to decide how to structure their parameters and
+the respective `sdk.Msg` messages. Consider the parameters currently defined in
+`x/auth` using the `x/params` module for parameter management:
+
+```protobuf
+message Params {
+  uint64 max_memo_characters = 1;
+  uint64 tx_sig_limit = 2;
+  uint64 tx_size_cost_per_byte = 3;
+  uint64 sig_verify_cost_ed25519 = 4;
+  uint64 sig_verify_cost_secp256k1 = 5;
+}
+```
+
+Developers can choose to either create a unique data structure for every field in
+`Params` or they can create a single `Params` structure, as outlined above in the
+case of `x/auth`.
+
+In the former approach, an `sdk.Msg` would need to be created for every single
+field along with a handler.
This can become burdensome if there are a lot of
+parameter fields. In the latter case, there is only a single data structure and
+thus only a single message handler; however, the message handler might have to be
+more sophisticated, as it might need to understand which parameters are being
+changed and which parameters are untouched.
+
+Param change proposals are made using the `x/gov` module. Execution is done through
+`x/authz` authorization to the root `x/gov` module's account.
+
+Continuing to use `x/auth`, we demonstrate a more complete example:
+
+```go
+type Params struct {
+	MaxMemoCharacters      uint64
+	TxSigLimit             uint64
+	TxSizeCostPerByte      uint64
+	SigVerifyCostED25519   uint64
+	SigVerifyCostSecp256k1 uint64
+}
+
+type MsgUpdateParams struct {
+	MaxMemoCharacters      uint64
+	TxSigLimit             uint64
+	TxSizeCostPerByte      uint64
+	SigVerifyCostED25519   uint64
+	SigVerifyCostSecp256k1 uint64
+}
+
+type MsgUpdateParamsResponse struct{}
+
+func (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {
+	ctx := sdk.UnwrapSDKContext(goCtx)
+
+	// verification logic...
+
+	// persist params
+	params := ParamsFromMsg(msg)
+	ms.SaveParams(ctx, params)
+
+	return &types.MsgUpdateParamsResponse{}, nil
+}
+
+func ParamsFromMsg(msg *types.MsgUpdateParams) Params {
+	// ...
+}
+```
+
+A gRPC `Service` query should also be provided, for example:
+
+```protobuf
+service Query {
+  // ...
+
+  rpc Params(QueryParamsRequest) returns (QueryParamsResponse) {
+    option (google.api.http).get = "/cosmos/<module>/v1beta1/params";
+  }
+}
+
+message QueryParamsResponse {
+  Params params = 1 [(gogoproto.nullable) = false];
+}
+```
+
+## Consequences
+
+As a result of implementing the module parameter methodology, we gain the ability
+for module parameter changes to be stateful and extensible to fit nearly every
+application's use case. We will be able to emit events (and trigger hooks registered
+to those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)),
+call other Msg service methods, or perform migrations.
+In addition, there will be significant gains in performance when it comes to reading
+and writing parameters from and to state, especially if a specific set of parameters
+is read on a consistent basis.
+
+However, this methodology will require developers to implement more types and
+Msg service methods, which can become burdensome if many parameters exist. In
+addition, developers are required to implement the persistence logic of module
+parameters. However, this should be trivial.
+
+### Backwards Compatibility
+
+The new method for working with module parameters is naturally not backwards
+compatible with the existing `x/params` module. However, the `x/params` will
+remain in the Cosmos SDK and will be marked as deprecated with no additional
+functionality being added apart from potential bug fixes. Note, the `x/params`
+module may be removed entirely in a future release.
+
+### Positive
+
+- Module parameters are serialized more efficiently.
+- Modules are able to react to parameter changes and perform additional actions.
+- Special events can be emitted, allowing hooks to be triggered.
+
+### Negative
+
+- Module parameters become slightly more burdensome for module developers:
+    - Modules are now responsible for persisting and retrieving parameter state (see the sketch below).
+    - Modules are now required to have unique message handlers to handle parameter
+      changes per unique parameter data structure.
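+
+As a minimal hypothetical sketch of the persistence responsibility above (the
+`paramsKey`, `storeKey`, and `cdc` names are illustrative assumptions, not APIs
+prescribed by this ADR):
+
+```go
+// paramsKey is a hypothetical single store key under which the module's
+// Params are saved as Protobuf-encoded bytes.
+var paramsKey = []byte{0x00}
+
+func (k Keeper) SaveParams(ctx sdk.Context, params types.Params) {
+	store := ctx.KVStore(k.storeKey)
+	store.Set(paramsKey, k.cdc.MustMarshal(&params))
+}
+
+func (k Keeper) GetParams(ctx sdk.Context) (params types.Params) {
+	store := ctx.KVStore(k.storeKey)
+	if bz := store.Get(paramsKey); bz != nil {
+		k.cdc.MustUnmarshal(bz, &params)
+	}
+	return params
+}
+```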
+
+### Neutral
+
+- Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed
+  and merged.
+
+## References
+
+- https://github.com/cosmos/cosmos-sdk/pull/9810
+- https://github.com/cosmos/cosmos-sdk/issues/9438
+- https://github.com/cosmos/cosmos-sdk/discussions/9913
diff --git a/docs/migrations/pre-upgrade.md b/docs/migrations/pre-upgrade.md
new file mode 100644
index 000000000000..00b4c54e2142
--- /dev/null
+++ b/docs/migrations/pre-upgrade.md
@@ -0,0 +1,55 @@
+# Pre-Upgrade Handling
+
+Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade.
+
+Using Cosmovisor pre-upgrade handling is optional. If pre-upgrade handling is not implemented, the upgrade continues.
+
+For example, you can make the required new-version changes to `app.toml` settings during pre-upgrade handling, so the file does not have to be manually updated after the upgrade.
+
+Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application.
+
+The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes:
+
+| Exit status code | How it is handled in Cosmovisor                                                                                 |
+|------------------|-----------------------------------------------------------------------------------------------------------------|
+| `0`              | Assumes `pre-upgrade` command executed successfully and continues the upgrade.                                  |
+| `1`              | Default exit code when `pre-upgrade` command has not been implemented.                                          |
+| `30`             | `pre-upgrade` command was executed but failed. This fails the entire upgrade.                                   |
+| `31`             | `pre-upgrade` command was executed but failed. The command is retried until exit code `1` or `30` is returned.  |
+
+## Sample
+
+Here is a sample structure of the `pre-upgrade` command:
+
+```go
+func preUpgradeCommand() *cobra.Command {
+	cmd := &cobra.Command{
+		Use:   "pre-upgrade",
+		Short: "Pre-upgrade command",
+		Long:  "Pre-upgrade command to implement custom pre-upgrade handling",
+		Run: func(cmd *cobra.Command, args []string) {
+			err := HandlePreUpgrade()
+			if err != nil {
+				os.Exit(30)
+			}
+
+			os.Exit(0)
+		},
+	}
+
+	return cmd
+}
+```
+
+Ensure that the pre-upgrade command has been registered in the application:
+
+```go
+rootCmd.AddCommand(
+	// ..
+	preUpgradeCommand(),
+	// ..
+)
+```
diff --git a/docs/migrations/rest.md b/docs/migrations/rest.md
index 6ed555613f84..dc767358239a 100644
--- a/docs/migrations/rest.md
+++ b/docs/migrations/rest.md
@@ -102,4 +102,8 @@ Previously, some modules exposed legacy `POST` endpoints to generate unsigned tr

 ## Migrating to gRPC

+<<<<<<< HEAD
 Instead of hitting REST endpoints as described in the previous paragraph, the SDK also exposes a gRPC server. Any client can use gRPC instead of REST to interact with the node. An overview of different ways to communicate with a node can be found [here](../core/grpc_rest.md), and a concrete tutorial for setting up a gRPC client [here](../run-node/txs.md#programmatically-with-go).
+=======
+Instead of hitting REST endpoints as described above, the Cosmos SDK also exposes a gRPC server. Any client can use gRPC instead of REST to interact with the node.
An overview of different ways to communicate with a node can be found [here](../core/grpc_rest.md), and a concrete tutorial for setting up a gRPC client can be found [here](../run-node/txs.md#programmatically-with-go).
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
diff --git a/docs/ru/README.md b/docs/ru/README.md
new file mode 100755
index 000000000000..e6906b2b89b3
--- /dev/null
+++ b/docs/ru/README.md
@@ -0,0 +1,3 @@
+# Cosmos SDK Documentation (Russian)
+
+A Russian translation of the Cosmos SDK documentation is not available for this version. If you would like to help with translating, please see [Internationalization](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md#internationalization). A `v0.39` version of the documentation can be found [here](https://github.com/cosmos/cosmos-sdk/tree/v0.39.3/docs/ru).
diff --git a/docs/run-node/rosetta.md b/docs/run-node/rosetta.md
index 98121867cfb7..36ad7c14af7e 100644
--- a/docs/run-node/rosetta.md
+++ b/docs/run-node/rosetta.md
@@ -1,6 +1,61 @@
 # Rosetta

+<<<<<<< HEAD
 Package rosetta implements the rosetta API for the current cosmos sdk release series.
+=======
+The `rosetta` package implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](../architecture/adr-035-rosetta-api-support.md).
+
+## Add Rosetta Command
+
+The Rosetta API server is a stand-alone server that connects to a node of a chain developed with the Cosmos SDK.
+
+To enable Rosetta API support, you need to add the `RosettaCommand` to your application's root command file (e.g. `appd/cmd/root.go`).
+
+Import the `server` package:
+
+```go
+    "github.com/cosmos/cosmos-sdk/server"
+```
+
+Find the following line:
+
+```go
+initRootCmd(rootCmd, encodingConfig)
+```
+
+After that line, add the following:
+
+```go
+rootCmd.AddCommand(
+    server.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Marshaler),
+)
+```
+
+The `RosettaCommand` function builds the `rosetta` root command and is defined in the `server` package within the Cosmos SDK.
+
+Since we’ve updated the Cosmos SDK to work with the Rosetta API, updating the application's root command file is all you need to do.
+
+An implementation example can be found in the `simapp` package.
+
+## Use Rosetta Command
+
+To run Rosetta in your application CLI, use the following command:
+
+```
+appd rosetta --help
+```
+
+To test and run Rosetta API endpoints for applications that are running and exposed, use the following command:
+
+```
+appd rosetta
+     --blockchain "your application name (ex: gaia)"
+     --network "your chain identifier (ex: testnet-1)"
+     --tendermint "tendermint endpoint (ex: localhost:26657)"
+     --grpc "gRPC endpoint (ex: localhost:9090)"
+     --addr "rosetta binding address (ex: :8080)"
+```
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))

 ## Extension
diff --git a/docs/run-node/run-node.md b/docs/run-node/run-node.md
index 119dca558426..9bc97e2e2f2e 100644
--- a/docs/run-node/run-node.md
+++ b/docs/run-node/run-node.md
@@ -39,6 +39,29 @@ The `~/.simapp` folder has the following structure:
   |- priv_validator_key.json   # Private key to use as a validator in the consensus protocol.
```

+<<<<<<< HEAD
+=======
+## Updating Some Default Settings
+
+If you want to change any field values in configuration files (for example, `genesis.json`), you can use the `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) and `sed` commands to do so. A few examples are listed here.
+
+```bash
+# to change the chain-id
+jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to enable the api server
+sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml
+
+# to change the voting_period
+jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to change the inflation
+jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json
+```
+
+## Adding Genesis Accounts
+
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
 Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](./keyring.md#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend).

 Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence:
diff --git a/docs/run-node/run-testnet.md b/docs/run-node/run-testnet.md
new file mode 100644
index 000000000000..0cbfe8cf9789
--- /dev/null
+++ b/docs/run-node/run-testnet.md
@@ -0,0 +1,99 @@
+
+
+# Running a Testnet
+
+The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. {synopsis}
+
+In addition to the commands for [running a node](./run-node.html), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process.
+
+## Initialize Files
+
+First, let's take a look at the `init-files` subcommand.
+
+This is similar to the `init` command when initializing a single node, but in this case we are initializing multiple nodes, generating the genesis transactions for each node, and then collecting those transactions.
+
+The `init-files` subcommand initializes the necessary files to run a test network in a separate process (i.e. using a Docker container). Running this command is not a prerequisite for the `start` subcommand ([see below](#start-testnet)).
+
+In order to initialize the files for a test network, run the following command:
+
+```bash
+simd testnet init-files
+```
+
+You should see the following output in your terminal:
+
+```bash
+Successfully initialized 4 node directories
+```
+
+The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory.
+
+### gentxs
+
+The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initialization process.
+
+### nodes
+
+A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e.
the same files included in the default `~/.simapp` directory when running a single node).
+
+## Start Testnet
+
+Now, let's take a look at the `start` subcommand.
+
+The `start` subcommand both initializes and starts an in-process test network. This is the fastest way to spin up a local test network for testing purposes.
+
+You can start the local test network by running the following command:
+
+```bash
+simd testnet start
+```
+
+You should see something similar to the following:
+
+```bash
+acquiring test network lock
+preparing test network with chain-id "chain-mtoD9v"
+
+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++       THIS MNEMONIC IS FOR TESTING PURPOSES ONLY        ++
+++                DO NOT USE IN PRODUCTION                 ++
+++                                                         ++
+++  sustain know debris minute gate hybrid stereo custom   ++
+++  divorce cross spoon machine latin vibrant term oblige  ++
+++   moment beauty laundry repeat grab game bronze truly   ++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+
+starting test network...
+started test network
+press the Enter Key to terminate
+```
+
+The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. The validator node uses the same default addresses used when initializing and starting a single node (no need to provide a `--node` flag).
+
+Check the status of the first validator node:
+
+```bash
+simd status
+```
+
+Import the key from the provided mnemonic:
+
+```bash
+simd keys add test --recover --keyring-backend test
+```
+
+Check the balance of the account address:
+
+```bash
+simd q bank balances [address]
+```
+
+Use this test account to manually test against the test network.
+
+## Testnet Options
+
+You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command.
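+
+For example, a hypothetical invocation might look like the following; the flag names here are assumptions based on the command's `--help` output and may differ between versions, so verify them against your binary:
+
+```bash
+# initialize files for a 4-validator test network in a custom directory
+# with a fixed chain id, using the test keyring backend
+simd testnet init-files --v 4 --output-dir ./.testnets \
+  --chain-id my-test-chain --keyring-backend test
+```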
diff --git a/go.mod b/go.mod index 744c5e5bbb99..5acfde188543 100644 --- a/go.mod +++ b/go.mod @@ -56,6 +56,79 @@ require ( gopkg.in/yaml.v2 v2.4.0 ) +<<<<<<< HEAD +======= +require ( + filippo.io/edwards25519 v1.0.0-beta.2 // indirect + github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d // indirect + github.com/DataDog/zstd v1.4.5 // indirect + github.com/Workiva/go-datastructures v1.0.52 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/cespare/xxhash v1.1.0 // indirect + github.com/cespare/xxhash/v2 v2.1.1 // indirect + github.com/cosmos/ledger-go v0.9.2 // indirect + github.com/danieljoos/wincred v1.0.2 // indirect + github.com/davecgh/go-spew v1.1.1 // indirect + github.com/desertbit/timer v0.0.0-20180107155436-c41aec40b27f // indirect + github.com/dgraph-io/badger/v2 v2.2007.2 // indirect + github.com/dgraph-io/ristretto v0.1.0 // indirect + github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 // indirect + github.com/dustin/go-humanize v1.0.0 // indirect + github.com/dvsekhvalnov/jose2go v0.0.0-20200901110807-248326c1351b // indirect + github.com/felixge/httpsnoop v1.0.1 // indirect + github.com/fsnotify/fsnotify v1.5.1 // indirect + github.com/go-kit/kit v0.10.0 // indirect + github.com/go-logfmt/logfmt v0.5.0 // indirect + github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 // indirect + github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b // indirect + github.com/golang/snappy v0.0.3 // indirect + github.com/google/btree v1.0.0 // indirect + github.com/google/orderedcode v0.0.1 // indirect + github.com/gorilla/websocket v1.4.2 // indirect + github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c // indirect + github.com/gtank/merlin v0.1.1 // indirect + github.com/gtank/ristretto255 v0.1.2 // indirect + github.com/hashicorp/go-immutable-radix v1.0.0 // indirect + github.com/hashicorp/hcl v1.0.0 // indirect + github.com/inconshreveable/mousetrap v1.0.0 // indirect + github.com/jmhodges/levigo v1.0.0 // indirect + github.com/keybase/go-keychain v0.0.0-20190712205309-48d3d31d256d // indirect + github.com/klauspost/compress v1.12.3 // indirect + github.com/lib/pq v1.10.2 // indirect + github.com/libp2p/go-buffer-pool v0.0.2 // indirect + github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect + github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 // indirect + github.com/minio/highwayhash v1.0.1 // indirect + github.com/mitchellh/mapstructure v1.4.2 // indirect + github.com/mtibben/percent v0.2.1 // indirect + github.com/pelletier/go-toml v1.9.4 // indirect + github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 // indirect + github.com/pmezard/go-difflib v1.0.0 // indirect + github.com/prometheus/client_model v0.2.0 // indirect + github.com/prometheus/procfs v0.6.0 // indirect + github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 // indirect + github.com/rs/cors v1.7.0 // indirect + github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa // indirect + github.com/spf13/afero v1.6.0 // indirect + github.com/spf13/jwalterweatherman v1.1.0 // indirect + github.com/subosito/gotenv v1.2.0 // indirect + github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca // indirect + github.com/tecbot/gorocksdb v0.0.0-20191217155057-f0fad39f321c // indirect + github.com/zondax/hid v0.9.0 // indirect + go.etcd.io/bbolt v1.3.5 // indirect + golang.org/x/net v0.0.0-20210903162142-ad29c8ab022f // indirect + golang.org/x/sys v0.0.0-20210903071746-97244b99971b // indirect + 
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 // indirect
+	golang.org/x/text v0.3.6 // indirect
+	gopkg.in/ini.v1 v1.63.2 // indirect
+	gopkg.in/yaml.v2 v2.4.0 // indirect
+	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
+	nhooyr.io/websocket v1.8.6 // indirect
+)
+
+// latest grpc doesn't work with our modified proto compiler, so we need to enforce
+// the following version across all dependencies.
+>>>>>>> 479485f95 (style: lint go and markdown (#10060))
 replace google.golang.org/grpc => google.golang.org/grpc v1.33.2

 replace github.com/gogo/protobuf => github.com/regen-network/protobuf v1.3.3-alpha.regen.1
diff --git a/scripts/module-tests.sh b/scripts/module-tests.sh
new file mode 100644
index 000000000000..86998b5aa99f
--- /dev/null
+++ b/scripts/module-tests.sh
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+
+# this script is used by GitHub CI to traverse all modules and run module tests.
+# the script expects a diff to be generated in order to skip some modules.
+
+# Executes go module tests and merges the coverage profile.
+# If the GIT_DIFF variable is set then it's used to test if a module has any file changes - if
+# it doesn't have any file changes then we will ignore the module tests.
+execute_mod_tests() {
+  go_mod=$1;
+  mod_dir=$(dirname "$go_mod");
+  mod_dir=${mod_dir:2};  # remove "./" prefix
+  root_dir=$(pwd);
+
+  # TODO: in the future we will need to disable it once we go into a multi module setup, because
+  # we will have cross module dependencies.
+  if [ -n "$GIT_DIFF" ] && ! grep $mod_dir <<< $GIT_DIFF; then
+    echo ">>> ignoring module $mod_dir - no changes in the module";
+    return;
+  fi;
+
+  echo ">>> running $go_mod tests"
+  cd $mod_dir;
+  go test -mod=readonly -timeout 30m -coverprofile=${root_dir}/${coverage_file}.tmp -covermode=atomic -tags='norace ledger test_ledger_mock' ./...
+  local ret=$?
+  echo "test return: " $ret;
+  cd -;
+  # strip the mode statement (the first line of the coverage profile)
+  tail -n +2 ${coverage_file}.tmp >> ${coverage_file}
+  rm ${coverage_file}.tmp;
+  return $ret;
+}
+
+# GIT_DIFF=`git status --porcelain`
+
+echo "GIT_DIFF: " $GIT_DIFF
+
+coverage_file=coverage-go-submod-profile.out
+return_val=0;
+
+for f in $(find -name go.mod -not -path "./go.mod"); do
+  execute_mod_tests $f;
+  if [[ $?
-ne 0 ]] ; then
+    return_val=2;
+  fi;
+done
+
+exit $return_val;
diff --git a/server/rosetta/client_online.go b/server/rosetta/client_online.go
index 177c01131810..f8396aab3a43 100644
--- a/server/rosetta/client_online.go
+++ b/server/rosetta/client_online.go
@@ -507,7 +507,7 @@ func extractInitialHeightFromGenesisChunk(genesisChunk string) (int64, error) {
 		return 0, err
 	}

-	re, err := regexp.Compile("\"initial_height\":\"(\\d+)\"")
+	re, err := regexp.Compile("\"initial_height\":\"(\\d+)\"") //nolint:gocritic
 	if err != nil {
 		return 0, err
 	}
diff --git a/server/rosetta/lib/internal/service/online.go b/server/rosetta/lib/internal/service/online.go
index c4315417267b..5cf3331273b1 100644
--- a/server/rosetta/lib/internal/service/online.go
+++ b/server/rosetta/lib/internal/service/online.go
@@ -51,7 +51,7 @@ func (o OnlineNetwork) AccountCoins(_ context.Context, _ *types.AccountCoinsRequ
 // networkOptionsFromClient builds network options given the client
 func networkOptionsFromClient(client crgtypes.Client, genesisBlock *types.BlockIdentifier) *types.NetworkOptionsResponse {
-	var tsi *int64 = nil
+	var tsi *int64
 	if genesisBlock != nil {
 		tsi = &(genesisBlock.Index)
 	}
diff --git a/simapp/simd/cmd/genaccounts.go b/simapp/simd/cmd/genaccounts.go
index 9e586943a2f3..c3340dccad18 100644
--- a/simapp/simd/cmd/genaccounts.go
+++ b/simapp/simd/cmd/genaccounts.go
@@ -39,7 +39,6 @@ contain valid denominations. Accounts may optionally be supplied with vesting pa
 		Args:  cobra.ExactArgs(2),
 		RunE: func(cmd *cobra.Command, args []string) error {
 			clientCtx := client.GetClientContextFromCmd(cmd)
-			serverCtx := server.GetServerContextFromCmd(cmd)
 			config := serverCtx.Config
diff --git a/snapshots/README.md b/snapshots/README.md
new file mode 100644
index 000000000000..dfe2d66e7245
--- /dev/null
+++ b/snapshots/README.md
@@ -0,0 +1,236 @@
+# State Sync Snapshotting
+
+The `snapshots` package implements automatic support for Tendermint state sync
+in Cosmos SDK-based applications. State sync allows a new node joining a network
+to simply fetch a recent snapshot of the application state instead of fetching
+and applying all historical blocks. This can reduce the time needed to join the
+network by several orders of magnitude (e.g. weeks to minutes), but the node
+will not contain historical data from previous heights.
+
+This document describes the Cosmos SDK implementation of the ABCI state sync
+interface. For more information on Tendermint state sync in general, see:
+
+* [Tendermint Core State Sync for Developers](https://medium.com/tendermint/tendermint-core-state-sync-for-developers-70a96ba3ee35)
+* [ABCI State Sync Spec](https://docs.tendermint.com/master/spec/abci/apps.html#state-sync)
+* [ABCI State Sync Method/Type Reference](https://docs.tendermint.com/master/spec/abci/abci.html#state-sync)
+
+## Overview
+
+For an overview of how Cosmos SDK state sync is set up and configured by
+developers and end-users, see the
+[Cosmos SDK State Sync Guide](https://blog.cosmos.network/cosmos-sdk-state-sync-guide-99e4cf43be2f).
+
+Briefly, the Cosmos SDK takes state snapshots at regular height intervals given
+by `state-sync.snapshot-interval` and stores them as binary files in the
+filesystem under `<node_home>/data/snapshots/`, with metadata in a LevelDB database
+`<node_home>/data/snapshots/metadata.db`. The number of recent snapshots to keep is given by
+`state-sync.snapshot-keep-recent`.
+
+Snapshots are taken asynchronously, i.e. new blocks will be applied concurrently
+with snapshots being taken.
This is possible because IAVL supports querying
+immutable historical heights. However, this requires `state-sync.snapshot-interval`
+to be a multiple of `pruning-keep-every`, to prevent a height from being removed
+while it is being snapshotted.
+
+When a remote node is state syncing, Tendermint calls the ABCI method
+`ListSnapshots` to list available local snapshots and `LoadSnapshotChunk` to
+load a binary snapshot chunk. When the local node is being state synced,
+Tendermint calls `OfferSnapshot` to offer a discovered remote snapshot to the
+local application and `ApplySnapshotChunk` to apply a binary snapshot chunk to
+the local application. See the resources linked above for more details on these
+methods and how Tendermint performs state sync.
+
+The Cosmos SDK does not currently do any incremental verification of snapshots
+during restoration, i.e. only after the entire snapshot has been restored will
+Tendermint compare the app hash against the trusted hash from the chain. Cosmos
+SDK snapshots and chunks do contain hashes as checksums to guard against IO
+corruption and non-determinism, but these are not tied to the chain state and
+can be trivially forged by an adversary. This was considered out of scope for
+the initial implementation, but can be added later without changes to the
+ABCI state sync protocol.
+
+## Snapshot Metadata
+
+The ABCI Protobuf type for a snapshot is listed below (refer to the ABCI spec
+for field details):
+
+```protobuf
+message Snapshot {
+  uint64 height = 1;   // The height at which the snapshot was taken
+  uint32 format = 2;   // The application-specific snapshot format
+  uint32 chunks = 3;   // Number of chunks in the snapshot
+  bytes hash = 4;      // Arbitrary snapshot hash, equal only if identical
+  bytes metadata = 5;  // Arbitrary application metadata
+}
+```
+
+Because the `metadata` field is application-specific, the Cosmos SDK uses a
+similar type `cosmos.base.snapshots.v1beta1.Snapshot` with its own metadata
+representation:
+
+```protobuf
+// Snapshot contains Tendermint state sync snapshot info.
+message Snapshot {
+  uint64 height = 1;
+  uint32 format = 2;
+  uint32 chunks = 3;
+  bytes hash = 4;
+  Metadata metadata = 5 [(gogoproto.nullable) = false];
+}
+
+// Metadata contains SDK-specific snapshot metadata.
+message Metadata {
+  repeated bytes chunk_hashes = 1; // SHA-256 chunk hashes
+}
+```
+
+The `format` is currently `1`, defined in `snapshots.types.CurrentFormat`. This
+must be increased whenever the binary snapshot format changes, and it may be
+useful to support past formats in newer versions.
+
+The `hash` is a SHA-256 hash of the entire binary snapshot, used to guard
+against IO corruption and non-determinism across nodes. Note that this is not
+tied to the chain state, and can be trivially forged (but Tendermint will always
+compare the final app hash against the chain app hash). Similarly, the
+`chunk_hashes` are SHA-256 checksums of each binary chunk.
+
+The `metadata` field is Protobuf-serialized before it is placed into the ABCI
+snapshot.
+
+## Snapshot Format
+
+The current version `1` snapshot format is a zlib-compressed, length-prefixed
+Protobuf stream of `cosmos.base.store.v1beta1.SnapshotItem` messages, split into
+chunks at exact 10 MB boundaries.
+
+```protobuf
+// SnapshotItem is an item contained in a rootmulti.Store snapshot.
+message SnapshotItem {
+  // item is the specific type of snapshot item.
+  oneof item {
+    SnapshotStoreItem store = 1;
+    SnapshotIAVLItem iavl = 2 [(gogoproto.customname) = "IAVL"];
+  }
+}
+
+// SnapshotStoreItem contains metadata about a snapshotted store.
+message SnapshotStoreItem {
+  string name = 1;
+}
+
+// SnapshotIAVLItem is an exported IAVL node.
+message SnapshotIAVLItem {
+  bytes key = 1;
+  bytes value = 2;
+  int64 version = 3;
+  int32 height = 4;
+}
+```
+
+Snapshots are generated by `rootmulti.Store.Snapshot()` as follows:
+
+1. Set up a `protoio.NewDelimitedWriter` that writes length-prefixed serialized
+   `SnapshotItem` Protobuf messages.
+   1. Iterate over each IAVL store in lexicographical order by store name.
+   2. Emit a `SnapshotStoreItem` containing the store name.
+   3. Start an IAVL export for the store using
+      [`iavl.ImmutableTree.Export()`](https://pkg.go.dev/github.com/tendermint/iavl#ImmutableTree.Export).
+   4. Iterate over each IAVL node.
+   5. Emit a `SnapshotIAVLItem` for the IAVL node.
+2. Pass the serialized Protobuf output stream to a zlib compression writer.
+3. Split the zlib output stream into chunks at exactly every 10th megabyte.
+
+Snapshots are restored via `rootmulti.Store.Restore()` as the inverse of the above, using
+[`iavl.MutableTree.Import()`](https://pkg.go.dev/github.com/tendermint/iavl#MutableTree.Import)
+to reconstruct each IAVL tree.
+
+## Snapshot Storage
+
+Snapshot storage is managed by `snapshots.Store`, with metadata in a `db.DB`
+database and binary chunks in the filesystem. Note that this is only used to
+store locally taken snapshots that are being offered to other nodes. When the
+local node is being state synced, Tendermint will take care of buffering and
+storing incoming snapshot chunks before they are applied to the application.
+
+Metadata is generally stored in a LevelDB database at
+`<node_home>/data/snapshots/metadata.db`. It contains serialized
+`cosmos.base.snapshots.v1beta1.Snapshot` Protobuf messages with a key given by
+the concatenation of a key prefix, the big-endian height, and the big-endian
+format. Chunk data is stored as regular files under
+`<node_home>/data/snapshots/<height>/<format>/<chunk>`.
+
+The `snapshots.Store` API is based on streaming IO, and integrates easily with
+the `snapshots.types.Snapshotter` snapshot/restore interface implemented by
+`rootmulti.Store`. The `Store.Save()` method stores a snapshot given as a
+`<-chan io.ReadCloser` channel of binary chunk streams, and `Store.Load()` loads
+the snapshot as a channel of binary chunk streams -- the same stream types used
+by `Snapshotter.Snapshot()` and `Snapshotter.Restore()` to take and restore
+snapshots using streaming IO.
+
+The store also provides many other methods such as `List()` to list stored
+snapshots, `LoadChunk()` to load a single snapshot chunk, and `Prune()` to prune
+old snapshots.
+
+## Taking Snapshots
+
+`snapshots.Manager` is a high-level snapshot manager that integrates a
+`snapshots.types.Snapshotter` (i.e. the `rootmulti.Store` snapshot
+functionality) and a `snapshots.Store`, providing an API that maps easily onto
+the ABCI state sync API. The `Manager` will also make sure only one operation
+is in progress at a time, e.g. to prevent multiple snapshots being taken
+concurrently.
+
+During `BaseApp.Commit`, once a state transition has been committed, the height
+is checked against the `state-sync.snapshot-interval` setting. If the committed
+height should be snapshotted, a goroutine `BaseApp.snapshot()` is spawned that
+calls `snapshots.Manager.Create()` to create the snapshot.
+
+`Manager.Create()` will do some basic pre-flight checks, and then start
+generating a snapshot by calling `rootmulti.Store.Snapshot()`. The chunk stream
+is passed into `snapshots.Store.Save()`, which stores the chunks in the
+filesystem and records the snapshot metadata in the snapshot database.
+
+Once the snapshot has been generated, `BaseApp.snapshot()` then removes any
+old snapshots based on the `state-sync.snapshot-keep-recent` setting.
+
+## Serving Snapshots
+
+When a remote node is discovering snapshots for state sync, Tendermint will
+call the `ListSnapshots` ABCI method to list the snapshots present on the
+local node. This is dispatched to `snapshots.Manager.List()`, which in turn
+dispatches to `snapshots.Store.List()`.
+
+When a remote node is fetching snapshot chunks during state sync, Tendermint
+will call the `LoadSnapshotChunk` ABCI method to fetch a chunk from the local
+node. This dispatches to `snapshots.Manager.LoadChunk()`, which in turn
+dispatches to `snapshots.Store.LoadChunk()`.
+
+## Restoring Snapshots
+
+When the operator has configured the local Tendermint node to run state sync
+(see the resources listed in the introduction for details on Tendermint state
+sync), it will discover snapshots across the P2P network and offer their
+metadata in turn to the local application via the `OfferSnapshot` ABCI call.
+
+`BaseApp.OfferSnapshot()` attempts to start a restore operation by calling
+`snapshots.Manager.Restore()`. This may fail, e.g. if the snapshot format is
+unknown (it may have been generated by a different version of the Cosmos SDK),
+in which case Tendermint will offer other discovered snapshots.
+
+If the snapshot is accepted, `Manager.Restore()` will record that a restore
+operation is in progress, and spawn a separate goroutine that runs a synchronous
+`rootmulti.Store.Restore()` snapshot restoration which will be fed snapshot
+chunks until it is complete.
+
+Tendermint will then start fetching and buffering chunks, providing them in
+order via ABCI `ApplySnapshotChunk` calls. These dispatch to
+`Manager.RestoreChunk()`, which passes the chunks to the ongoing restore
+process, checking if errors have been encountered yet (e.g. due to checksum
+mismatches or invalid IAVL data). Once the final chunk is passed,
+`Manager.RestoreChunk()` will wait for the restore process to complete before
+returning.
+
+Once the restore is completed, Tendermint will go on to call the `Info` ABCI
+call to fetch the app hash, and compare this against the trusted chain app
+hash at the snapshot height to verify the restored state. If it matches,
+Tendermint goes on to process blocks.
diff --git a/store/streaming/README.md b/store/streaming/README.md
new file mode 100644
index 000000000000..46e343416a52
--- /dev/null
+++ b/store/streaming/README.md
@@ -0,0 +1,67 @@
+# State Streaming Service
+
+This package contains the constructors for the `StreamingService`s used to write state changes out from individual KVStores to a
+file or stream, as described in [ADR-038](../../docs/architecture/adr-038-state-listening.md) and defined in [types/streaming.go](../../baseapp/streaming.go).
+The child directories contain the implementations for specific output destinations.
+
+Currently, a `StreamingService` implementation that writes state changes out to files is supported; in the future, support for additional
+output destinations can be added.
+
+The `StreamingService` is configured from within an App using the `AppOptions` loaded from the app.toml file:
+
+```toml
+[store]
+    streamers = [ # if len(streamers) > 0 we are streaming
+        "file", # name of the streaming service, used by constructor
+    ]
+
+[streamers]
+    [streamers.file]
+        keys = ["list", "of", "store", "keys", "we", "want", "to", "expose", "for", "this", "streaming", "service"]
+        write_dir = "path to the write directory"
+        prefix = "optional prefix to prepend to the generated file names"
+```
+
+`store.streamers` contains a list of the names of the `StreamingService` implementations to employ, which are used by `ServiceTypeFromString`
+to return the `ServiceConstructor` for that particular implementation:
+
+```go
+listeners := cast.ToStringSlice(appOpts.Get("store.streamers"))
+for _, listenerName := range listeners {
+	constructor, err := ServiceTypeFromString(listenerName)
+	if err != nil {
+		// handle error
+	}
+}
+```
+
+`streamers` contains a mapping of the specific `StreamingService` implementation name to the configuration parameters for that specific service.
+`streamers.x.keys` contains the list of `StoreKey` names for the KVStores to expose using this service and is required by every type of `StreamingService`.
+In order to expose *all* KVStores, we can include `*` in this list. An empty list is equivalent to turning the service off.
+
+Additional configuration parameters are optional and specific to the implementation.
+In the case of the file streaming service, `streamers.file.write_dir` contains the path to the
+directory to write the files to, and `streamers.file.prefix` contains an optional prefix to prepend to the output files to prevent potential collisions
+with other App `StreamingService` output files.
+
+The `ServiceConstructor` accepts `AppOptions`, the store keys collected using `streamers.x.keys`, and a `BinaryMarshaller`, and
+returns a `StreamingService` implementation. The `AppOptions` are passed in to provide access to any implementation-specific configuration options,
+e.g., in the case of the file streaming service, the `streamers.file.write_dir` and `streamers.file.prefix`.
+
+```go
+streamingService, err := constructor(appOpts, exposeStoreKeys, appCodec)
+if err != nil {
+	// handle error
+}
+```
+
+The returned `StreamingService` is loaded into the BaseApp using the BaseApp's `SetStreamingService` method.
+The `Stream` method is called on the service to begin the streaming process. Depending on the implementation, this process
+may be synchronous or asynchronous with the message processing of the state machine.
+
+```go
+bApp.SetStreamingService(streamingService)
+wg := new(sync.WaitGroup)
+quitChan := make(chan struct{})
+streamingService.Stream(wg, quitChan)
+```
diff --git a/store/streaming/file/README.md b/store/streaming/file/README.md
new file mode 100644
index 000000000000..3e4a248e1a95
--- /dev/null
+++ b/store/streaming/file/README.md
@@ -0,0 +1,66 @@
+# File Streaming Service
+
+This package contains an implementation of the [StreamingService](../../../baseapp/streaming.go) that writes
+the data stream out to files on the local filesystem. This process is performed synchronously with the message processing
+of the state machine.
+ +## Configuration + +The `file.StreamingService` is configured from within an App using the `AppOptions` loaded from the app.toml file: + +```toml +[store] + streamers = [ # if len(streamers) > 0 we are streaming + "file", # name of the streaming service, used by constructor + ] + +[streamers] + [streamers.file] + keys = ["list", "of", "store", "keys", "we", "want", "to", "expose", "for", "this", "streaming", "service"] + write_dir = "path to the write directory" + prefix = "optional prefix to prepend to the generated file names" +``` + +We turn the service on by adding its name, "file", to `store.streamers` - the list of streaming services for this App to employ. + +In `streamers.file` we include three configuration parameters for the file streaming service: + +1. `streamers.file.keys` contains the list of `StoreKey` names for the KVStores to expose using this service. +In order to expose *all* KVStores, we can include `*` in this list. An empty list is equivalent to turning the service off. +2. `streamers.file.write_dir` contains the path to the directory to write the files to. +3. `streamers.file.prefix` contains an optional prefix to prepend to the output files to prevent potential collisions +with other App `StreamingService` output files. + +### Encoding + +For each pair of `BeginBlock` requests and responses, a file is created and named `block-{N}-begin`, where N is the block number. +At the head of this file the length-prefixed protobuf encoded `BeginBlock` request is written. +At the tail of this file the length-prefixed protobuf encoded `BeginBlock` response is written. +In between these two encoded messages, the state changes that occurred due to the `BeginBlock` request are written chronologically as +a series of length-prefixed protobuf encoded `StoreKVPair`s representing `Set` and `Delete` operations within the KVStores the service +is configured to listen to. + +For each pair of `DeliverTx` requests and responses, a file is created and named `block-{N}-tx-{M}` where N is the block number and M +is the tx number in the block (i.e. 0, 1, 2...). +At the head of this file the length-prefixed protobuf encoded `DeliverTx` request is written. +At the tail of this file the length-prefixed protobuf encoded `DeliverTx` response is written. +In between these two encoded messages, the state changes that occurred due to the `DeliverTx` request are written chronologically as +a series of length-prefixed protobuf encoded `StoreKVPair`s representing `Set` and `Delete` operations within the KVStores the service +is configured to listen to. + +For each pair of `EndBlock` requests and responses, a file is created and named `block-{N}-end`, where N is the block number. +At the head of this file the length-prefixed protobuf encoded `EndBlock` request is written. +At the tail of this file the length-prefixed protobuf encoded `EndBlock` response is written. +In between these two encoded messages, the state changes that occurred due to the `EndBlock` request are written chronologically as +a series of length-prefixed protobuf encoded `StoreKVPair`s representing `Set` and `Delete` operations within the KVStores the service +is configured to listen to. + +### Decoding + +To decode the files written in the above format, we read all the bytes from a given file into memory and segment them into proto +messages based on the length-prefixing of each message.
Once segmented, it is known that the first message is the ABCI request, +the last message is the ABCI response, and that every message in between is a `StoreKVPair`. This enables us to decode each segment into +the appropriate message type. + +The type of ABCI req/res, the block height, and the transaction index (where relevant) are known +from the file name, and the KVStore each `StoreKVPair` originates from is known since the `StoreKey` is included as a field in the proto message. diff --git a/store/v2/flat/store.go b/store/v2/flat/store.go new file mode 100644 index 000000000000..9076ec15f4c7 --- /dev/null +++ b/store/v2/flat/store.go @@ -0,0 +1,479 @@ +package flat + +import ( + "crypto/sha256" + "errors" + "fmt" + "io" + "math" + "sync" + + dbm "github.com/cosmos/cosmos-sdk/db" + "github.com/cosmos/cosmos-sdk/db/prefix" + abci "github.com/tendermint/tendermint/abci/types" + + util "github.com/cosmos/cosmos-sdk/internal" + "github.com/cosmos/cosmos-sdk/store/cachekv" + "github.com/cosmos/cosmos-sdk/store/listenkv" + "github.com/cosmos/cosmos-sdk/store/tracekv" + "github.com/cosmos/cosmos-sdk/store/types" + "github.com/cosmos/cosmos-sdk/store/v2/smt" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/kv" +) + +var ( + _ types.KVStore = (*Store)(nil) + _ types.CommitKVStore = (*Store)(nil) + _ types.Queryable = (*Store)(nil) +) + +var ( + merkleRootKey = []byte{0} // Key for root hash of Merkle tree + dataPrefix = []byte{1} // Prefix for state mappings + indexPrefix = []byte{2} // Prefix for Store reverse index + merkleNodePrefix = []byte{3} // Prefix for Merkle tree nodes + merkleValuePrefix = []byte{4} // Prefix for Merkle value mappings +) + +var ( + ErrVersionDoesNotExist = errors.New("version does not exist") + ErrMaximumHeight = errors.New("maximum block height reached") +) + +type StoreConfig struct { + // Version pruning options for backing DBs. + Pruning types.PruningOptions + // The backing DB to use for the state commitment Merkle tree data. + // If nil, Merkle data is stored in the state storage DB under a separate prefix. + MerkleDB dbm.DBConnection + InitialVersion uint64 +} + +// Store is a CommitKVStore which handles state storage and commitments as separate concerns, +// optionally using separate backing key-value DBs for each. +// Allows synchronized R/W access by locking. +type Store struct { + stateDB dbm.DBConnection + stateTxn dbm.DBReadWriter + dataTxn dbm.DBReadWriter + merkleTxn dbm.DBReadWriter + indexTxn dbm.DBReadWriter + // State commitment (SC) KV store for current version + merkleStore *smt.Store + + opts StoreConfig + mtx sync.RWMutex +} + +var DefaultStoreConfig = StoreConfig{Pruning: types.PruneDefault, MerkleDB: nil} + +// NewStore creates a new Store, or loads one if db contains existing data.
+func NewStore(db dbm.DBConnection, opts StoreConfig) (ret *Store, err error) { + versions, err := db.Versions() + if err != nil { + return + } + loadExisting := false + // If the DB is not empty, attempt to load existing data + if saved := versions.Count(); saved != 0 { + if opts.InitialVersion != 0 && versions.Last() < opts.InitialVersion { + return nil, fmt.Errorf("latest saved version is less than initial version: %v < %v", + versions.Last(), opts.InitialVersion) + } + loadExisting = true + } + err = db.Revert() + if err != nil { + return + } + stateTxn := db.ReadWriter() + defer func() { + if err != nil { + err = util.CombineErrors(err, stateTxn.Discard(), "stateTxn.Discard also failed") + } + }() + merkleTxn := stateTxn + if opts.MerkleDB != nil { + var mversions dbm.VersionSet + mversions, err = opts.MerkleDB.Versions() + if err != nil { + return + } + // Version sets of each DB must match + if !versions.Equal(mversions) { + err = fmt.Errorf("Storage and Merkle DB have different version history") //nolint:stylecheck + return + } + err = opts.MerkleDB.Revert() + if err != nil { + return + } + merkleTxn = opts.MerkleDB.ReadWriter() + } + + var merkleStore *smt.Store + if loadExisting { + var root []byte + root, err = stateTxn.Get(merkleRootKey) + if err != nil { + return + } + if root == nil { + err = fmt.Errorf("could not get root of SMT") + return + } + merkleStore = loadSMT(merkleTxn, root) + } else { + merkleNodes := prefix.NewPrefixReadWriter(merkleTxn, merkleNodePrefix) + merkleValues := prefix.NewPrefixReadWriter(merkleTxn, merkleValuePrefix) + merkleStore = smt.NewStore(merkleNodes, merkleValues) + } + return &Store{ + stateDB: db, + stateTxn: stateTxn, + dataTxn: prefix.NewPrefixReadWriter(stateTxn, dataPrefix), + indexTxn: prefix.NewPrefixReadWriter(stateTxn, indexPrefix), + merkleTxn: merkleTxn, + merkleStore: merkleStore, + opts: opts, + }, nil +} + +func (s *Store) Close() error { + err := s.stateTxn.Discard() + if s.opts.MerkleDB != nil { + err = util.CombineErrors(err, s.merkleTxn.Discard(), "merkleTxn.Discard also failed") + } + return err +} + +// Get implements KVStore. +func (s *Store) Get(key []byte) []byte { + s.mtx.RLock() + defer s.mtx.RUnlock() + + val, err := s.dataTxn.Get(key) + if err != nil { + panic(err) + } + return val +} + +// Has implements KVStore. +func (s *Store) Has(key []byte) bool { + s.mtx.RLock() + defer s.mtx.RUnlock() + + has, err := s.dataTxn.Has(key) + if err != nil { + panic(err) + } + return has +} + +// Set implements KVStore. +func (s *Store) Set(key, value []byte) { + s.mtx.Lock() + defer s.mtx.Unlock() + + err := s.dataTxn.Set(key, value) + if err != nil { + panic(err) + } + s.merkleStore.Set(key, value) + khash := sha256.Sum256(key) + err = s.indexTxn.Set(khash[:], key) + if err != nil { + panic(err) + } +} + +// Delete implements KVStore. +func (s *Store) Delete(key []byte) { + khash := sha256.Sum256(key) + s.mtx.Lock() + defer s.mtx.Unlock() + + s.merkleStore.Delete(key) + _ = s.indexTxn.Delete(khash[:]) + _ = s.dataTxn.Delete(key) +} + +type contentsIterator struct { + dbm.Iterator + valid bool +} + +func newIterator(source dbm.Iterator) *contentsIterator { + ret := &contentsIterator{Iterator: source} + ret.Next() + return ret +} + +func (it *contentsIterator) Next() { it.valid = it.Iterator.Next() } +func (it *contentsIterator) Valid() bool { return it.valid } + +// Iterator implements KVStore. 
+func (s *Store) Iterator(start, end []byte) types.Iterator { + iter, err := s.dataTxn.Iterator(start, end) + if err != nil { + panic(err) + } + return newIterator(iter) +} + +// ReverseIterator implements KVStore. +func (s *Store) ReverseIterator(start, end []byte) types.Iterator { + iter, err := s.dataTxn.ReverseIterator(start, end) + if err != nil { + panic(err) + } + return newIterator(iter) +} + +// GetStoreType implements Store. +func (s *Store) GetStoreType() types.StoreType { + return types.StoreTypeDecoupled +} + +// Commit implements Committer. +func (s *Store) Commit() types.CommitID { + versions, err := s.stateDB.Versions() + if err != nil { + panic(err) + } + target := versions.Last() + 1 + if target > math.MaxInt64 { + panic(ErrMaximumHeight) + } + // Fast forward to initial version if needed + if s.opts.InitialVersion != 0 && target < s.opts.InitialVersion { + target = s.opts.InitialVersion + } + cid, err := s.commit(target) + if err != nil { + panic(err) + } + + previous := cid.Version - 1 + if s.opts.Pruning.KeepEvery != 1 && s.opts.Pruning.Interval != 0 && cid.Version%int64(s.opts.Pruning.Interval) == 0 { + // The range of newly prunable versions + lastPrunable := previous - int64(s.opts.Pruning.KeepRecent) + firstPrunable := lastPrunable - int64(s.opts.Pruning.Interval) + for version := firstPrunable; version <= lastPrunable; version++ { + if s.opts.Pruning.KeepEvery == 0 || version%int64(s.opts.Pruning.KeepEvery) != 0 { + // Pruning is best-effort, so deletion errors are intentionally ignored + s.stateDB.DeleteVersion(uint64(version)) + if s.opts.MerkleDB != nil { + s.opts.MerkleDB.DeleteVersion(uint64(version)) + } + } + } + } + return *cid +} + +func (s *Store) commit(target uint64) (id *types.CommitID, err error) { + root := s.merkleStore.Root() + err = s.stateTxn.Set(merkleRootKey, root) + if err != nil { + return + } + err = s.stateTxn.Commit() + if err != nil { + return + } + defer func() { + if err != nil { + err = util.CombineErrors(err, s.stateDB.Revert(), "stateDB.Revert also failed") + } + }() + err = s.stateDB.SaveVersion(target) + if err != nil { + return + } + + stateTxn := s.stateDB.ReadWriter() + defer func() { + if err != nil { + err = util.CombineErrors(err, stateTxn.Discard(), "stateTxn.Discard also failed") + } + }() + merkleTxn := stateTxn + + // If DBs are not separate, Merkle state has been committed & snapshotted + if s.opts.MerkleDB != nil { + defer func() { + if err != nil { + if delerr := s.stateDB.DeleteVersion(target); delerr != nil { + err = fmt.Errorf("%w: commit rollback failed: %v", err, delerr) + } + } + }() + + err = s.merkleTxn.Commit() + if err != nil { + return + } + defer func() { + if err != nil { + err = util.CombineErrors(err, s.opts.MerkleDB.Revert(), "merkleDB.Revert also failed") + } + }() + + err = s.opts.MerkleDB.SaveVersion(target) + if err != nil { + return + } + merkleTxn = s.opts.MerkleDB.ReadWriter() + } + + s.stateTxn = stateTxn + s.dataTxn = prefix.NewPrefixReadWriter(stateTxn, dataPrefix) + s.indexTxn = prefix.NewPrefixReadWriter(stateTxn, indexPrefix) + s.merkleTxn = merkleTxn + s.merkleStore = loadSMT(merkleTxn, root) + + return &types.CommitID{Version: int64(target), Hash: root}, nil +} + +// LastCommitID implements Committer.
+func (s *Store) LastCommitID() types.CommitID { + versions, err := s.stateDB.Versions() + if err != nil { + panic(err) + } + last := versions.Last() + if last == 0 { + return types.CommitID{} + } + // Latest Merkle root is the one currently stored + hash, err := s.stateTxn.Get(merkleRootKey) + if err != nil { + panic(err) + } + return types.CommitID{Version: int64(last), Hash: hash} +} + +func (s *Store) GetPruning() types.PruningOptions { return s.opts.Pruning } +func (s *Store) SetPruning(po types.PruningOptions) { s.opts.Pruning = po } + +// Query implements the Queryable interface, allowing ABCI queries. +// +// By default we will return from (latest height - 1), +// as we will have Merkle proofs immediately (header height = data height + 1). +// If latest-1 is not present, use latest (which must be present). +// If you care to have the latest data to see a tx result, you must +// explicitly set the height you want to see. +func (s *Store) Query(req abci.RequestQuery) (res abci.ResponseQuery) { + if len(req.Data) == 0 { + return sdkerrors.QueryResult(sdkerrors.Wrap(sdkerrors.ErrTxDecode, "query cannot be zero length"), false) + } + + // if height is 0, use the latest height + height := req.Height + if height == 0 { + versions, err := s.stateDB.Versions() + if err != nil { + return sdkerrors.QueryResult(errors.New("failed to get version info"), false) + } + latest := versions.Last() + if versions.Exists(latest - 1) { + height = int64(latest - 1) + } else { + height = int64(latest) + } + } + if height < 0 { + return sdkerrors.QueryResult(fmt.Errorf("height overflow: %v", height), false) + } + res.Height = height + + switch req.Path { + case "/key": + var err error + res.Key = req.Data // data holds the key bytes + + dbr, err := s.stateDB.ReaderAt(uint64(height)) + if err != nil { + if errors.Is(err, dbm.ErrVersionDoesNotExist) { + err = sdkerrors.ErrInvalidHeight + } + return sdkerrors.QueryResult(err, false) + } + defer dbr.Discard() + contents := prefix.NewPrefixReader(dbr, dataPrefix) + res.Value, err = contents.Get(res.Key) + if err != nil { + return sdkerrors.QueryResult(err, false) + } + if !req.Prove { + break + } + merkleView := dbr + if s.opts.MerkleDB != nil { + merkleView, err = s.opts.MerkleDB.ReaderAt(uint64(height)) + if err != nil { + return sdkerrors.QueryResult( + fmt.Errorf("version exists in state DB but not Merkle DB: %v", height), false) + } + defer merkleView.Discard() + } + root, err := dbr.Get(merkleRootKey) + if err != nil { + return sdkerrors.QueryResult(err, false) + } + if root == nil { + return sdkerrors.QueryResult(errors.New("Merkle root hash not found"), false) //nolint:stylecheck + } + merkleStore := loadSMT(dbm.ReaderAsReadWriter(merkleView), root) + res.ProofOps, err = merkleStore.GetProof(res.Key) + if err != nil { + return sdkerrors.QueryResult(fmt.Errorf("Merkle proof creation failed for key: %v", res.Key), false) //nolint:stylecheck + } + + case "/subspace": + pairs := kv.Pairs{ + Pairs: make([]kv.Pair, 0), + } + + subspace := req.Data + res.Key = subspace + + iterator := s.Iterator(subspace, types.PrefixEndBytes(subspace)) + for ; iterator.Valid(); iterator.Next() { + pairs.Pairs = append(pairs.Pairs, kv.Pair{Key: iterator.Key(), Value: iterator.Value()}) + } + iterator.Close() + + bz, err := pairs.Marshal() + if err != nil { + panic(fmt.Errorf("failed to marshal KV pairs: %w", err)) + } + + res.Value = bz + + default: + return sdkerrors.QueryResult(sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "unexpected query path: %v", req.Path), false) + } + + return res +} + 
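As a hedged illustration of the query path above (not part of this change), a hypothetical caller could perform a proven key lookup like this; the field names follow the ABCI `RequestQuery` type used in `Query`:

```go
package demo

import (
	abci "github.com/tendermint/tendermint/abci/types"

	"github.com/cosmos/cosmos-sdk/store/v2/flat"
)

// provenGet looks a key up through the store's ABCI Query method, requesting
// an SMT proof alongside the value. Height 0 selects latest-1 when that
// version is available, per the Query doc comment above.
func provenGet(store *flat.Store, key []byte, height int64) ([]byte, bool) {
	res := store.Query(abci.RequestQuery{
		Path:   "/key",
		Data:   key, // raw key bytes, not hex
		Height: height,
		Prove:  true, // also populate res.ProofOps
	})
	if res.Code != 0 {
		return nil, false
	}
	// res.ProofOps can be verified against the app hash of height+1
	// (header height = data height + 1).
	return res.Value, true
}
```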
+func loadSMT(merkleTxn dbm.DBReadWriter, root []byte) *smt.Store { + merkleNodes := prefix.NewPrefixReadWriter(merkleTxn, merkleNodePrefix) + merkleValues := prefix.NewPrefixReadWriter(merkleTxn, merkleValuePrefix) + return smt.LoadStore(merkleNodes, merkleValues, root) +} + +func (s *Store) CacheWrap() types.CacheWrap { + return cachekv.NewStore(s) +} + +func (s *Store) CacheWrapWithTrace(w io.Writer, tc types.TraceContext) types.CacheWrap { + return cachekv.NewStore(tracekv.NewStore(s, w, tc)) +} + +func (s *Store) CacheWrapWithListeners(storeKey types.StoreKey, listeners []types.WriteListener) types.CacheWrap { + return cachekv.NewStore(listenkv.NewStore(s, storeKey, listeners)) +} diff --git a/store/v2/smt/store.go b/store/v2/smt/store.go new file mode 100644 index 000000000000..ce4130174337 --- /dev/null +++ b/store/v2/smt/store.go @@ -0,0 +1,99 @@ +package smt + +import ( + "crypto/sha256" + "errors" + + "github.com/cosmos/cosmos-sdk/store/types" + tmcrypto "github.com/tendermint/tendermint/proto/tendermint/crypto" + + "github.com/lazyledger/smt" +) + +var ( + _ types.BasicKVStore = (*Store)(nil) +) + +var ( + errKeyEmpty = errors.New("key is empty or nil") + errValueNil = errors.New("value is nil") +) + +// Store implements types.BasicKVStore on top of a SparseMerkleTree. +type Store struct { + tree *smt.SparseMerkleTree +} + +func NewStore(nodes, values smt.MapStore) *Store { + return &Store{ + tree: smt.NewSparseMerkleTree(nodes, values, sha256.New()), + } +} + +func LoadStore(nodes, values smt.MapStore, root []byte) *Store { + return &Store{ + tree: smt.ImportSparseMerkleTree(nodes, values, sha256.New(), root), + } +} + +func (s *Store) GetProof(key []byte) (*tmcrypto.ProofOps, error) { + proof, err := s.tree.Prove(key) + if err != nil { + return nil, err + } + op := NewProofOp(s.tree.Root(), key, SHA256, proof) + return &tmcrypto.ProofOps{Ops: []tmcrypto.ProofOp{op.ProofOp()}}, nil +} + +func (s *Store) Root() []byte { return s.tree.Root() } + +// BasicKVStore interface below: + +// Get returns nil iff key doesn't exist. Panics on nil key. +func (s *Store) Get(key []byte) []byte { + if len(key) == 0 { + panic(errKeyEmpty) + } + val, err := s.tree.Get(key) + if err != nil { + panic(err) + } + return val +} + +// Has checks if a key exists. Panics on nil key. +func (s *Store) Has(key []byte) bool { + if len(key) == 0 { + panic(errKeyEmpty) + } + has, err := s.tree.Has(key) + if err != nil { + panic(err) + } + return has +} + +// Set sets the key. Panics on nil key or value. +func (s *Store) Set(key []byte, value []byte) { + if len(key) == 0 { + panic(errKeyEmpty) + } + if value == nil { + panic(errValueNil) + } + _, err := s.tree.Update(key, value) + if err != nil { + panic(err) + } +} + +// Delete deletes the key. Panics on nil key. +func (s *Store) Delete(key []byte) { + if len(key) == 0 { + panic(errKeyEmpty) + } + _, err := s.tree.Delete(key) + if err != nil { + panic(err) + } +} diff --git a/types/denom.go b/types/denom.go index 160e2806e1df..0e8d716a2670 100644 --- a/types/denom.go +++ b/types/denom.go @@ -9,7 +9,7 @@ import ( var denomUnits = map[string]Dec{} // baseDenom is the denom of smallest unit registered -var baseDenom string = "" +var baseDenom string // RegisterDenom registers a denomination with a corresponding unit. If the // denomination is already registered, an error will be returned.
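For context on the `types/denom.go` hunk above, the denom registry pairs each denomination with a `Dec` unit, and the smallest registered unit becomes the base denom. A minimal usage sketch, assuming the `RegisterDenom`/`ConvertCoin` helpers behave as their doc comments describe (denoms and amounts here are illustrative):

```go
package main

import (
	"fmt"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

func main() {
	// Register a display unit and its base (smallest) unit. RegisterDenom
	// returns an error if the denomination is already registered.
	if err := sdk.RegisterDenom("atom", sdk.OneDec()); err != nil {
		panic(err)
	}
	if err := sdk.RegisterDenom("uatom", sdk.NewDecWithPrec(1, 6)); err != nil {
		panic(err)
	}

	// Convert 3atom into the smaller registered unit.
	coin := sdk.NewInt64Coin("atom", 3)
	converted, err := sdk.ConvertCoin(coin, "uatom")
	if err != nil {
		panic(err)
	}
	fmt.Println(converted) // expected: 3000000uatom
}
```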
diff --git a/x/auth/ante/sigverify.go b/x/auth/ante/sigverify.go index 8ff8ee8d98d7..ad7f1024440c 100644 --- a/x/auth/ante/sigverify.go +++ b/x/auth/ante/sigverify.go @@ -228,7 +228,12 @@ func OnlyLegacyAminoSigners(sigData signing.SignatureData) bool { } } +<<<<<<< HEAD:x/auth/ante/sigverify.go func (svd SigVerificationDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (newCtx sdk.Context, err error) { +======= +func (svd sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, isReCheckTx, simulate bool) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) +>>>>>>> 479485f95 (style: lint go and markdown (#10060)):x/auth/middleware/sigverify.go // no need to verify signatures on recheck tx if ctx.IsReCheckTx() { return next(ctx, tx, simulate) @@ -253,7 +258,11 @@ func (svd SigVerificationDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simul } for i, sig := range sigs { +<<<<<<< HEAD:x/auth/ante/sigverify.go acc, err := GetSignerAcc(ctx, svd.ak, signerAddrs[i]) +======= + acc, err := GetSignerAcc(sdkCtx, svd.ak, signerAddrs[i]) +>>>>>>> 479485f95 (style: lint go and markdown (#10060)):x/auth/middleware/sigverify.go if err != nil { return ctx, err } diff --git a/x/auth/middleware/basic.go b/x/auth/middleware/basic.go new file mode 100644 index 000000000000..1bfd98d868d6 --- /dev/null +++ b/x/auth/middleware/basic.go @@ -0,0 +1,358 @@ +package middleware + +import ( + "context" + + "github.com/cosmos/cosmos-sdk/codec/legacy" + "github.com/cosmos/cosmos-sdk/crypto/keys/multisig" + cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" + sdk "github.com/cosmos/cosmos-sdk/types" + sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" + "github.com/cosmos/cosmos-sdk/types/tx" + "github.com/cosmos/cosmos-sdk/types/tx/signing" + "github.com/cosmos/cosmos-sdk/x/auth/migrations/legacytx" + authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" + abci "github.com/tendermint/tendermint/abci/types" +) + +type validateBasicTxHandler struct { + next tx.Handler +} + +// ValidateBasicMiddleware will call tx.ValidateBasic and msg.ValidateBasic (for each msg inside the tx) +// and return any non-nil error. +// If ValidateBasic passes, the middleware calls the next middleware in the chain. Note, +// validateBasicTxHandler will not get executed on ReCheckTx since it +// is not dependent on application state. +func ValidateBasicMiddleware(txh tx.Handler) tx.Handler { + return validateBasicTxHandler{ + next: txh, + } +} + +var _ tx.Handler = validateBasicTxHandler{} + +// validateBasicTxMsgs executes basic validator calls for messages. +func validateBasicTxMsgs(msgs []sdk.Msg) error { + if len(msgs) == 0 { + return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") + } + + for _, msg := range msgs { + err := msg.ValidateBasic() + if err != nil { + return err + } + } + + return nil +} + +// CheckTx implements tx.Handler.CheckTx. +func (txh validateBasicTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { + // no need to validate basic on recheck tx, call next middleware + if req.Type == abci.CheckTxType_Recheck { + return txh.next.CheckTx(ctx, tx, req) + } + + if err := validateBasicTxMsgs(tx.GetMsgs()); err != nil { + return abci.ResponseCheckTx{}, err + } + + if err := tx.ValidateBasic(); err != nil { + return abci.ResponseCheckTx{}, err + } + + return txh.next.CheckTx(ctx, tx, req) +} + +// DeliverTx implements tx.Handler.DeliverTx.
+func (txh validateBasicTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { + if err := tx.ValidateBasic(); err != nil { + return abci.ResponseDeliverTx{}, err + } + + if err := validateBasicTxMsgs(tx.GetMsgs()); err != nil { + return abci.ResponseDeliverTx{}, err + } + + return txh.next.DeliverTx(ctx, tx, req) +} + +// SimulateTx implements tx.Handler.SimulateTx. +func (txh validateBasicTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) { + if err := sdkTx.ValidateBasic(); err != nil { + return tx.ResponseSimulateTx{}, err + } + + if err := validateBasicTxMsgs(sdkTx.GetMsgs()); err != nil { + return tx.ResponseSimulateTx{}, err + } + + return txh.next.SimulateTx(ctx, sdkTx, req) +} + +var _ tx.Handler = txTimeoutHeightTxHandler{} + +type txTimeoutHeightTxHandler struct { + next tx.Handler +} + +// TxTimeoutHeightMiddleware defines a middleware that checks for a +// tx height timeout. +func TxTimeoutHeightMiddleware(txh tx.Handler) tx.Handler { + return txTimeoutHeightTxHandler{ + next: txh, + } +} + +func checkTimeout(ctx context.Context, tx sdk.Tx) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + timeoutTx, ok := tx.(sdk.TxWithTimeoutHeight) + if !ok { + return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "expected tx to implement TxWithTimeoutHeight") + } + + timeoutHeight := timeoutTx.GetTimeoutHeight() + if timeoutHeight > 0 && uint64(sdkCtx.BlockHeight()) > timeoutHeight { + return sdkerrors.Wrapf( + sdkerrors.ErrTxTimeoutHeight, "block height: %d, timeout height: %d", sdkCtx.BlockHeight(), timeoutHeight, + ) + } + + return nil +} + +// CheckTx implements tx.Handler.CheckTx. +func (txh txTimeoutHeightTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { + if err := checkTimeout(ctx, tx); err != nil { + return abci.ResponseCheckTx{}, err + } + + return txh.next.CheckTx(ctx, tx, req) +} + +// DeliverTx implements tx.Handler.DeliverTx. +func (txh txTimeoutHeightTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { + if err := checkTimeout(ctx, tx); err != nil { + return abci.ResponseDeliverTx{}, err + } + + return txh.next.DeliverTx(ctx, tx, req) +} + +// SimulateTx implements tx.Handler.SimulateTx. 
+func (txh txTimeoutHeightTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) { + if err := checkTimeout(ctx, sdkTx); err != nil { + return tx.ResponseSimulateTx{}, err + } + + return txh.next.SimulateTx(ctx, sdkTx, req) +} + +type validateMemoTxHandler struct { + ak AccountKeeper + next tx.Handler +} + +// ValidateMemoMiddleware will validate the memo given the parameters passed in. +// If the memo is too large, the middleware returns an error; otherwise it calls the next middleware. +// CONTRACT: Tx must implement TxWithMemo interface +func ValidateMemoMiddleware(ak AccountKeeper) tx.Middleware { + return func(txHandler tx.Handler) tx.Handler { + return validateMemoTxHandler{ + ak: ak, + next: txHandler, + } + } +} + +var _ tx.Handler = validateMemoTxHandler{} + +func (vmm validateMemoTxHandler) checkForValidMemo(ctx context.Context, tx sdk.Tx) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + memoTx, ok := tx.(sdk.TxWithMemo) + if !ok { + return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type") + } + + params := vmm.ak.GetParams(sdkCtx) + + memoLength := len(memoTx.GetMemo()) + if uint64(memoLength) > params.MaxMemoCharacters { + return sdkerrors.Wrapf(sdkerrors.ErrMemoTooLarge, + "maximum number of characters is %d but received %d characters", + params.MaxMemoCharacters, memoLength, + ) + } + + return nil +} + +// CheckTx implements tx.Handler.CheckTx method. +func (vmm validateMemoTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { + if err := vmm.checkForValidMemo(ctx, tx); err != nil { + return abci.ResponseCheckTx{}, err + } + + return vmm.next.CheckTx(ctx, tx, req) +} + +// DeliverTx implements tx.Handler.DeliverTx method. +func (vmm validateMemoTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { + if err := vmm.checkForValidMemo(ctx, tx); err != nil { + return abci.ResponseDeliverTx{}, err + } + + return vmm.next.DeliverTx(ctx, tx, req) +} + +// SimulateTx implements tx.Handler.SimulateTx method. +func (vmm validateMemoTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) { + if err := vmm.checkForValidMemo(ctx, sdkTx); err != nil { + return tx.ResponseSimulateTx{}, err + } + + return vmm.next.SimulateTx(ctx, sdkTx, req) +} + +var _ tx.Handler = consumeTxSizeGasTxHandler{} + +type consumeTxSizeGasTxHandler struct { + ak AccountKeeper + next tx.Handler +} + +// ConsumeTxSizeGasMiddleware will take in parameters and consume gas proportional +// to the size of the tx before calling the next middleware. Note, the gas costs will be +// slightly overestimated due to the fact that any given signing account may need +// to be retrieved from state. +// +// CONTRACT: If simulate=true, then signatures must either be completely filled +// in or empty. +// CONTRACT: To use this middleware, signatures of the transaction must be represented +// as legacytx.StdSignature; otherwise, simulate mode will incorrectly estimate the gas cost.
+func ConsumeTxSizeGasMiddleware(ak AccountKeeper) tx.Middleware { + return func(txHandler tx.Handler) tx.Handler { + return consumeTxSizeGasTxHandler{ + ak: ak, + next: txHandler, + } + } +} + +func (cgts consumeTxSizeGasTxHandler) simulateSigGasCost(ctx context.Context, tx sdk.Tx) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + params := cgts.ak.GetParams(sdkCtx) + + sigTx, ok := tx.(authsigning.SigVerifiableTx) + if !ok { + return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid tx type") + } + + // in simulate mode, each element should be a nil signature + sigs, err := sigTx.GetSignaturesV2() + if err != nil { + return err + } + n := len(sigs) + + for i, signer := range sigTx.GetSigners() { + // if signature is already filled in, no need to simulate gas cost + if i < n && !isIncompleteSignature(sigs[i].Data) { + continue + } + + var pubkey cryptotypes.PubKey + + acc := cgts.ak.GetAccount(sdkCtx, signer) + + // use placeholder simSecp256k1Pubkey if sig is nil + if acc == nil || acc.GetPubKey() == nil { + pubkey = simSecp256k1Pubkey + } else { + pubkey = acc.GetPubKey() + } + + // use stdsignature to mock the size of a full signature + simSig := legacytx.StdSignature{ //nolint:staticcheck // this will be removed when proto is ready + Signature: simSecp256k1Sig[:], + PubKey: pubkey, + } + + sigBz := legacy.Cdc.MustMarshal(simSig) + cost := sdk.Gas(len(sigBz) + 6) + + // If the pubkey is a multi-signature pubkey, then we estimate for the maximum + // number of signers. + if _, ok := pubkey.(*multisig.LegacyAminoPubKey); ok { + cost *= params.TxSigLimit + } + + sdkCtx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*cost, "txSize") + } + + return nil +} + +func (cgts consumeTxSizeGasTxHandler) consumeTxSizeGas(ctx context.Context, _ sdk.Tx, txBytes []byte, simulate bool) error { + sdkCtx := sdk.UnwrapSDKContext(ctx) + params := cgts.ak.GetParams(sdkCtx) + sdkCtx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*sdk.Gas(len(txBytes)), "txSize") + + return nil +} + +// CheckTx implements tx.Handler.CheckTx. +func (cgts consumeTxSizeGasTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { + if err := cgts.consumeTxSizeGas(ctx, tx, req.GetTx(), false); err != nil { + return abci.ResponseCheckTx{}, err + } + + return cgts.next.CheckTx(ctx, tx, req) +} + +// DeliverTx implements tx.Handler.DeliverTx. +func (cgts consumeTxSizeGasTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { + if err := cgts.consumeTxSizeGas(ctx, tx, req.GetTx(), false); err != nil { + return abci.ResponseDeliverTx{}, err + } + + return cgts.next.DeliverTx(ctx, tx, req) +} + +// SimulateTx implements tx.Handler.SimulateTx. 
+func (cgts consumeTxSizeGasTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) { + if err := cgts.consumeTxSizeGas(ctx, sdkTx, req.TxBytes, true); err != nil { + return tx.ResponseSimulateTx{}, err + } + + if err := cgts.simulateSigGasCost(ctx, sdkTx); err != nil { + return tx.ResponseSimulateTx{}, err + } + + return cgts.next.SimulateTx(ctx, sdkTx, req) +} + +// isIncompleteSignature tests whether SignatureData is fully filled in for simulation purposes +func isIncompleteSignature(data signing.SignatureData) bool { + if data == nil { + return true + } + + switch data := data.(type) { + case *signing.SingleSignatureData: + return len(data.Signature) == 0 + case *signing.MultiSignatureData: + if len(data.Signatures) == 0 { + return true + } + for _, s := range data.Signatures { + if isIncompleteSignature(s) { + return true + } + } + } + + return false +} diff --git a/x/auth/spec/01_concepts.md b/x/auth/spec/01_concepts.md index 9f8c9b8d8f2d..f723751f06a2 100644 --- a/x/auth/spec/01_concepts.md +++ b/x/auth/spec/01_concepts.md @@ -4,6 +4,16 @@ order: 1 # Concepts +<<<<<<< HEAD +======= +**Note:** The auth module is different from the [authz module](../modules/authz/). + +The differences are: + +* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types. +* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter. + +>>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Gas & Fees Fees serve two purposes for an operator of the network. diff --git a/x/auth/spec/05_vesting.md b/x/auth/spec/05_vesting.md index 214db97d15e6..399a6d3be02a 100644 --- a/x/auth/spec/05_vesting.md +++ b/x/auth/spec/05_vesting.md @@ -614,3 +614,8 @@ linearly over time. all coins at a given time. - PeriodicVestingAccount: A vesting account implementation that vests coins according to a custom vesting schedule. +<<<<<<< HEAD +======= +- PermanentLockedAccount: It does not ever release coins, locking them indefinitely. +Coins in this account can still be used for delegating and for governance votes even while locked. +>>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/auth/spec/07_client.md b/x/auth/spec/07_client.md new file mode 100644 index 000000000000..bcfdc6f6faee --- /dev/null +++ b/x/auth/spec/07_client.md @@ -0,0 +1,421 @@ + + +# Client + +# Auth + +## CLI + +A user can query and interact with the `auth` module using the CLI. + +### Query + +The `query` commands allow users to query `auth` state. + +```bash +simd query auth --help +``` + +#### account + +The `account` command allows users to query for an account by its address. + +```bash +simd query auth account [address] [flags] +``` + +Example: + +```bash +simd query auth account cosmos1... +``` + +Example Output: + +```bash +'@type': /cosmos.auth.v1beta1.BaseAccount +account_number: "0" +address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 +pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD +sequence: "1" +``` + +#### accounts + +The `accounts` command allows users to query all the available accounts.
+ +```bash +simd query auth accounts [flags] +``` + +Example: + +```bash +simd query auth accounts +``` + +Example Output: + +```bash +accounts: +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "0" + address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2 + pub_key: + '@type': /cosmos.crypto.secp256k1.PubKey + key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD + sequence: "1" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "8" + address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr + pub_key: null + sequence: "0" + name: transfer + permissions: + - minter + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "4" + address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh + pub_key: null + sequence: "0" + name: bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "5" + address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r + pub_key: null + sequence: "0" + name: not_bonded_tokens_pool + permissions: + - burner + - staking +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "6" + address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn + pub_key: null + sequence: "0" + name: gov + permissions: + - burner +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "3" + address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl + pub_key: null + sequence: "0" + name: distribution + permissions: [] +- '@type': /cosmos.auth.v1beta1.BaseAccount + account_number: "1" + address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j + pub_key: null + sequence: "0" +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "7" + address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q + pub_key: null + sequence: "0" + name: mint + permissions: + - minter +- '@type': /cosmos.auth.v1beta1.ModuleAccount + base_account: + account_number: "2" + address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta + pub_key: null + sequence: "0" + name: fee_collector + permissions: [] +pagination: + next_key: null + total: "0" +``` + +#### params + +The `params` command allows users to query the current auth parameters. + +```bash +simd query auth params [flags] +``` + +Example: + +```bash +simd query auth params +``` + +Example Output: + +```bash +max_memo_characters: "256" +sig_verify_cost_ed25519: "590" +sig_verify_cost_secp256k1: "1000" +tx_sig_limit: "7" +tx_size_cost_per_byte: "10" +``` + +## gRPC + +A user can query the `auth` module using gRPC endpoints. + +### Account + +The `account` endpoint allows users to query for an account by its address. + +```bash +cosmos.auth.v1beta1.Query/Account +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"address":"cosmos1.."}' \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Account +``` + +Example Output: + +```bash +{ + "account":{ + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", + "pubKey":{ + "@type":"/cosmos.crypto.secp256k1.PubKey", + "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" + }, + "sequence":"1" + } +} +``` + +### Accounts + +The `accounts` endpoint allows users to query all the available accounts.
+ +```bash +cosmos.auth.v1beta1.Query/Accounts +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Accounts +``` + +Example Output: + +```bash +{ + "accounts":[ + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2", + "pubKey":{ + "@type":"/cosmos.crypto.secp256k1.PubKey", + "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD" + }, + "sequence":"1" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr", + "accountNumber":"8" + }, + "name":"transfer", + "permissions":[ + "minter", + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh", + "accountNumber":"4" + }, + "name":"bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r", + "accountNumber":"5" + }, + "name":"not_bonded_tokens_pool", + "permissions":[ + "burner", + "staking" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn", + "accountNumber":"6" + }, + "name":"gov", + "permissions":[ + "burner" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl", + "accountNumber":"3" + }, + "name":"distribution" + }, + { + "@type":"/cosmos.auth.v1beta1.BaseAccount", + "accountNumber":"1", + "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j" + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q", + "accountNumber":"7" + }, + "name":"mint", + "permissions":[ + "minter" + ] + }, + { + "@type":"/cosmos.auth.v1beta1.ModuleAccount", + "baseAccount":{ + "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta", + "accountNumber":"2" + }, + "name":"fee_collector" + } + ], + "pagination":{ + "total":"9" + } +} +``` + +### Params + +The `params` endpoint allows users to query the current auth parameters. + +```bash +cosmos.auth.v1beta1.Query/Params +``` + +Example: + +```bash +grpcurl -plaintext \ + localhost:9090 \ + cosmos.auth.v1beta1.Query/Params +``` + +Example Output: + +```bash +{ + "params": { + "maxMemoCharacters": "256", + "txSigLimit": "7", + "txSizeCostPerByte": "10", + "sigVerifyCostEd25519": "590", + "sigVerifyCostSecp256k1": "1000" + } +} +``` + +## REST + +A user can query the `auth` module using REST endpoints. + +### Account + +The `account` endpoint allows users to query for an account by its address. + +```bash +/cosmos/auth/v1beta1/account?address={address} +``` + +### Accounts + +The `accounts` endpoint allows users to query all the available accounts. + +```bash +/cosmos/auth/v1beta1/accounts +``` + +### Params + +The `params` endpoint allows users to query the current auth parameters. + +```bash +/cosmos/auth/v1beta1/params +``` + +# Vesting + +## CLI + +A user can query and interact with the `vesting` module using the CLI. + +### Transactions + +The `tx` commands allow users to interact with the `vesting` module. + +```bash +simd tx vesting --help +``` + +#### create-periodic-vesting-account + +The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens, defined as a sequence of coins and period lengths in seconds.
Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation. + +```bash +simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags] +``` + +Example: + +```bash +simd tx vesting create-periodic-vesting-account cosmos1.. periods.json +``` + +#### create-vesting-account + +The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the `--delayed` flag. All vesting accounts created will have their start time set by the committed block's time. The `end_time` must be provided as a UNIX epoch timestamp. + +```bash +simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags] +``` + +Example: + +```bash +simd tx vesting create-vesting-account cosmos1.. 100stake 2592000 +``` diff --git a/x/authz/spec/05_client.md b/x/authz/spec/05_client.md new file mode 100644 index 000000000000..f2ca6cc94690 --- /dev/null +++ b/x/authz/spec/05_client.md @@ -0,0 +1,172 @@ + + +# Client + +## CLI + +A user can query and interact with the `authz` module using the CLI. + +### Query + +The `query` commands allow users to query `authz` state. + +```bash +simd query authz --help +``` + +#### grants + +The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type. + +```bash +simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags] +``` + +Example: + +```bash +simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend +``` + +Example Output: + +```bash +grants: +- authorization: + '@type': /cosmos.bank.v1beta1.SendAuthorization + spend_limit: + - amount: "100" + denom: stake + expiration: "2022-01-01T00:00:00Z" +pagination: null +``` + +### Transactions + +The `tx` commands allow users to interact with the `authz` module. + +```bash +simd tx authz --help +``` + +#### exec + +The `exec` command allows a grantee to execute a transaction on behalf of the granter. + +```bash +simd tx authz exec [tx-json-file] --from [grantee] [flags] +``` + +Example: + +```bash +simd tx authz exec tx.json --from=cosmos1.. +``` + +#### grant + +The `grant` command allows a granter to grant an authorization to a grantee. + +```bash +simd tx authz grant [grantee] [authorization-type] --from=[granter] [flags] +``` + +Example: + +```bash +simd tx authz grant cosmos1.. send --spend-limit=100stake --from=cosmos1.. +``` + +#### revoke + +The `revoke` command allows a granter to revoke an authorization from a grantee. + +```bash +simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags] +``` + +Example: + +```bash +simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1.. +``` + +## gRPC + +A user can query the `authz` module using gRPC endpoints. + +### Grants + +The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.
+ +```bash +cosmos.authz.v1beta1.Query/Grants +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \ + localhost:9090 \ + cosmos.authz.v1beta1.Query/Grants +``` + +Example Output: + +```bash +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spendLimit": [ + { + "denom":"stake", + "amount":"100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ] +} +``` + +## REST + +A user can query the `authz` module using REST endpoints. + +```bash +/cosmos/authz/v1beta1/grants +``` + +Example: + +```bash +curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend" +``` + +Example Output: + +```bash +{ + "grants": [ + { + "authorization": { + "@type": "/cosmos.bank.v1beta1.SendAuthorization", + "spend_limit": [ + { + "denom": "stake", + "amount": "100" + } + ] + }, + "expiration": "2022-01-01T00:00:00Z" + } + ], + "pagination": null +} +``` diff --git a/x/bank/spec/README.md b/x/bank/spec/README.md index 9a1a0afb6edc..dd7f8df8aba2 100644 --- a/x/bank/spec/README.md +++ b/x/bank/spec/README.md @@ -100,3 +100,9 @@ The available permissions are: 4. **[Events](04_events.md)** - [Handlers](04_events.md#handlers) 5. **[Parameters](05_params.md)** +<<<<<<< HEAD +======= +6. **[Client](06_client.md)** + - [CLI](06_client.md#cli) + - [gRPC](06_client.md#grpc) +>>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/crisis/spec/05_client.md b/x/crisis/spec/05_client.md new file mode 100644 index 000000000000..5f95955a65eb --- /dev/null +++ b/x/crisis/spec/05_client.md @@ -0,0 +1,31 @@ + + +# Client + +## CLI + +A user can query and interact with the `crisis` module using the CLI. + +### Transactions + +The `tx` commands allow users to interact with the `crisis` module. + +```bash +simd tx crisis --help +``` + +#### invariant-broken + +The `invariant-broken` command submits proof that an invariant was broken in order to halt the chain. + +```bash +simd tx crisis invariant-broken [module-name] [invariant-route] [flags] +``` + +Example: + +```bash +simd tx crisis invariant-broken bank total-supply --from=[keyname or address] +``` diff --git a/x/distribution/legacy/v043/helpers.go b/x/distribution/legacy/v043/helpers.go index 141863beb946..58c6d741ce04 100644 --- a/x/distribution/legacy/v043/helpers.go +++ b/x/distribution/legacy/v043/helpers.go @@ -19,7 +19,7 @@ func MigratePrefixAddress(store sdk.KVStore, prefixBz []byte) { for ; oldStoreIter.Valid(); oldStoreIter.Next() { addr := oldStoreIter.Key() - var newStoreKey []byte = prefixBz + var newStoreKey = prefixBz newStoreKey = append(newStoreKey, address.MustLengthPrefix(addr)...) // Set new key on store. Values don't change. diff --git a/x/distribution/spec/README.md b/x/distribution/spec/README.md index 868b425901b1..ac641ab84103 100644 --- a/x/distribution/spec/README.md +++ b/x/distribution/spec/README.md @@ -101,3 +101,9 @@ to set up a script to periodically withdraw and rebond rewards. - [BeginBlocker](06_events.md#beginblocker) - [Handlers](06_events.md#handlers) 7. **[Parameters](07_params.md)** +<<<<<<< HEAD +======= +8. 
**[Client](08_client.md)** + - [CLI](08_client.md#cli) + - [gRPC](08_client.md#grpc) +>>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/epoching/keeper/keeper.go b/x/epoching/keeper/keeper.go new file mode 100644 index 000000000000..f6869f50e8f6 --- /dev/null +++ b/x/epoching/keeper/keeper.go @@ -0,0 +1,192 @@ +package keeper + +import ( + "time" + + "github.com/cosmos/cosmos-sdk/codec" + codectypes "github.com/cosmos/cosmos-sdk/codec/types" + storetypes "github.com/cosmos/cosmos-sdk/store/types" + sdk "github.com/cosmos/cosmos-sdk/types" + db "github.com/tendermint/tm-db" +) + +const ( + DefaultEpochActionID = 1 + DefaultEpochNumber = 0 +) + +var ( + NextEpochActionID = []byte{0x11} + EpochNumberID = []byte{0x12} + EpochActionQueuePrefix = []byte{0x13} // prefix for the epoch +) + +// Keeper of the store +type Keeper struct { + storeKey storetypes.StoreKey + cdc codec.BinaryCodec + // Used to calculate the estimated next epoch time. + // This is local to every node + // TODO: remove in favor of consensus param when its added + commitTimeout time.Duration +} + +// NewKeeper creates an epoch queue manager +func NewKeeper(cdc codec.BinaryCodec, key storetypes.StoreKey, commitTimeout time.Duration) Keeper { + return Keeper{ + storeKey: key, + cdc: cdc, + commitTimeout: commitTimeout, + } +} + +// GetNewActionID returns the ID to be used for the next epoch action +func (k Keeper) GetNewActionID(ctx sdk.Context) uint64 { + store := ctx.KVStore(k.storeKey) + + bz := store.Get(NextEpochActionID) + if bz == nil { + // return the default action ID (1) + return DefaultEpochActionID + } + id := sdk.BigEndianToUint64(bz) + + // increment next action ID + store.Set(NextEpochActionID, sdk.Uint64ToBigEndian(id+1)) + + return id +} + +// ActionStoreKey returns the action store key from an ID. +// NOTE: epochNumber and actionID are each truncated to a single byte here. +func ActionStoreKey(epochNumber int64, actionID uint64) []byte { + return append(EpochActionQueuePrefix, byte(epochNumber), byte(actionID)) +} + +// QueueMsgForEpoch saves an action that needs to be executed at the given epoch +func (k Keeper) QueueMsgForEpoch(ctx sdk.Context, epochNumber int64, msg sdk.Msg) { + store := ctx.KVStore(k.storeKey) + + bz, err := k.cdc.MarshalInterface(msg) + if err != nil { + panic(err) + } + + actionID := k.GetNewActionID(ctx) + store.Set(ActionStoreKey(epochNumber, actionID), bz) +} + +// RestoreEpochAction restores an action that needs to be executed at the given epoch +func (k Keeper) RestoreEpochAction(ctx sdk.Context, epochNumber int64, action *codectypes.Any) { + store := ctx.KVStore(k.storeKey) + + // reference from TestMarshalAny(t *testing.T) + bz, err := k.cdc.MarshalInterface(action) + if err != nil { + panic(err) + } + + actionID := k.GetNewActionID(ctx) + store.Set(ActionStoreKey(epochNumber, actionID), bz) +} + +// GetEpochMsg gets a msg by ID +func (k Keeper) GetEpochMsg(ctx sdk.Context, epochNumber int64, actionID uint64) sdk.Msg { + store := ctx.KVStore(k.storeKey) + + bz := store.Get(ActionStoreKey(epochNumber, actionID)) + if bz == nil { + return nil + } + + var action sdk.Msg + k.cdc.UnmarshalInterface(bz, &action) + + return action +} + +// GetEpochActions gets all actions +func (k Keeper) GetEpochActions(ctx sdk.Context) []sdk.Msg { + actions := []sdk.Msg{} + iterator := k.GetEpochActionsIterator(ctx) + defer iterator.Close() + + for ; iterator.Valid(); iterator.Next() { + var action sdk.Msg + bz := iterator.Value() + k.cdc.UnmarshalInterface(bz, &action) + actions = append(actions, action) + } + + return actions +} + +// GetEpochActionsIterator returns an iterator for
EpochActions +func (k Keeper) GetEpochActionsIterator(ctx sdk.Context) db.Iterator { + return sdk.KVStorePrefixIterator(ctx.KVStore(k.storeKey), EpochActionQueuePrefix) +} + +// DequeueEpochActions dequeues all the actions stored in the epoch queue +func (k Keeper) DequeueEpochActions(ctx sdk.Context) { + store := ctx.KVStore(k.storeKey) + iterator := sdk.KVStorePrefixIterator(store, EpochActionQueuePrefix) + defer iterator.Close() + + for ; iterator.Valid(); iterator.Next() { + key := iterator.Key() + store.Delete(key) + } +} + +// DeleteByKey deletes an item by key +func (k Keeper) DeleteByKey(ctx sdk.Context, key []byte) { + store := ctx.KVStore(k.storeKey) + store.Delete(key) +} + +// GetEpochActionByIterator gets an action by iterator +func (k Keeper) GetEpochActionByIterator(iterator db.Iterator) sdk.Msg { + bz := iterator.Value() + + var action sdk.Msg + k.cdc.UnmarshalInterface(bz, &action) + + return action +} + +// SetEpochNumber sets the epoch number +func (k Keeper) SetEpochNumber(ctx sdk.Context, epochNumber int64) { + store := ctx.KVStore(k.storeKey) + store.Set(EpochNumberID, sdk.Uint64ToBigEndian(uint64(epochNumber))) +} + +// GetEpochNumber fetches the epoch number +func (k Keeper) GetEpochNumber(ctx sdk.Context) int64 { + store := ctx.KVStore(k.storeKey) + + bz := store.Get(EpochNumberID) + if bz == nil { + return DefaultEpochNumber + } + + return int64(sdk.BigEndianToUint64(bz)) } + +// IncreaseEpochNumber increases the epoch number +func (k Keeper) IncreaseEpochNumber(ctx sdk.Context) { + epochNumber := k.GetEpochNumber(ctx) + k.SetEpochNumber(ctx, epochNumber+1) +} + +// GetNextEpochHeight returns the next epoch block height +func (k Keeper) GetNextEpochHeight(ctx sdk.Context, epochInterval int64) int64 { + currentHeight := ctx.BlockHeight() + return currentHeight + (epochInterval - currentHeight%epochInterval) +} + +// GetNextEpochTime returns the estimated next epoch time +func (k Keeper) GetNextEpochTime(ctx sdk.Context, epochInterval int64) time.Time { + currentTime := ctx.BlockTime() + currentHeight := ctx.BlockHeight() + + return currentTime.Add(k.commitTimeout * time.Duration(k.GetNextEpochHeight(ctx, epochInterval)-currentHeight)) +} diff --git a/x/epoching/spec/03_to_improve.md b/x/epoching/spec/03_to_improve.md new file mode 100644 index 000000000000..5ee5bd2ad0d5 --- /dev/null +++ b/x/epoching/spec/03_to_improve.md @@ -0,0 +1,44 @@ + + +# Changes to make + +## Validator self-unbonding (which exceeds minimum self-delegation) could be required to start instantly + +Cases that trigger the unbonding process: + +- A validator undelegation can unbond more tokens than the validator's minimum_self_delegation, which automatically turns the validator into unbonding. +In this case, unbonding should start instantly. +- A validator misses blocks and gets slashed. +- A validator gets slashed for double signing. + +**Note:** When a validator begins the unbonding process, it could be required to turn the validator into unbonding state instantly. + This is different from a specific delegator beginning to unbond. A validator beginning to unbond means that it's not in the set any more. + A delegator unbonding from a validator removes their delegation from the validator.
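To make the queue mechanics that these changes operate on concrete, here is a minimal, hedged sketch of how a module might defer a message to the next epoch boundary using the keeper shown above (the function name and the choice of `MsgUndelegate` are hypothetical):

```go
package keeper

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// QueueUndelegationForNextEpoch defers a MsgUndelegate to the next epoch
// boundary instead of executing it immediately. The message is marshalled
// as an interface and replayed when the epoch actions are dequeued (see
// QueueMsgForEpoch and GetEpochActions above).
func (k Keeper) QueueUndelegationForNextEpoch(ctx sdk.Context, msg *stakingtypes.MsgUndelegate) {
	nextEpoch := k.GetEpochNumber(ctx) + 1
	k.QueueMsgForEpoch(ctx, nextEpoch, msg)
}
```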
+ +## Pending development + +```go +// Changes to make +// — Implement correct next epoch time calculation +// — For validator self undelegation, it could be required to do start on end blocker +// — Implement TODOs on the PR #46 +// Implement CLI commands for querying +// — BufferedValidators +// — BufferedMsgCreateValidatorQueue, BufferedMsgEditValidatorQueue +// — BufferedMsgUnjailQueue, BufferedMsgDelegateQueue, BufferedMsgRedelegationQueue, BufferedMsgUndelegateQueue +// Write epoch related tests with new scenarios +// — Simulation test is important for finding bugs (Ask Dev for questions) +// — Can easily add a simulator check to make sure all delegation amounts in queue add up to the same amount that’s in the EpochUnbondedPool +// — I’d like it added as an invariant test for the simulator +// — the simulator should check that the sum of all the queued delegations always equals the amount kept track in the data +// — Staking/Slashing/Distribution module params are being modified by governance based on vote result instantly. We should test the effect. +// — — Should test to see what would happen if max_validators is changed though, in the middle of an epoch +// — we should define some new invariants that help check that everything is working smoothly with these new changes for 3 modules e.g. https://github.com/cosmos/cosmos-sdk/blob/master/x/staking/keeper/invariants.go +// — — Within Epoch, ValidationPower = ValidationPower - SlashAmount +// — — When epoch actions queue is empty, EpochDelegationPool balance should be zero +// — we should count all the delegation changes that happen during the epoch, and then make sure that the resulting change at the end of the epoch is actually correct +// — If the validator that I delegated to double signs at block 16, I should still get slashed instantly because even though I asked to unbond at 14, they still used my power at block 16, I should only be not liable for slashes once my power is stopped being used +// — On the converse of this, I should still be getting rewards while my power is being used. I shouldn’t stop receiving rewards until block 20 +``` diff --git a/x/evidence/spec/07_client.md b/x/evidence/spec/07_client.md new file mode 100644 index 000000000000..52a4b34f70fe --- /dev/null +++ b/x/evidence/spec/07_client.md @@ -0,0 +1,188 @@ +# Client + +## CLI + +A user can query and interact with the `evidence` module using the CLI. + +### Query + +The `query` commands allow users to query `evidence` state. + +```bash +simd query evidence --help +``` + +#### evidence + +The `evidence` command allows users to list all evidence or evidence by hash. + +Usage: + +```bash +simd query evidence [flags] +``` + +To query evidence by hash: + +Example: + +```bash +simd query evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660" +``` + +Example Output: + +```bash +evidence: + consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h + height: 11 + power: 100 + time: "2021-10-20T16:08:38.194017624Z" +``` + +To get all evidence: + +Example: + +```bash +simd query evidence +``` + +Example Output: + +```bash +evidence: + consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h + height: 11 + power: 100 + time: "2021-10-20T16:08:38.194017624Z" +pagination: + next_key: null + total: "1" +``` + +## REST + +A user can query the `evidence` module using REST endpoints.
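Beyond `curl`, these endpoints can of course be consumed programmatically. A hedged sketch in Go, assuming a local node exposing the REST API on the default `localhost:1317` (the hash below is the one from the examples in this page):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	hash := "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
	resp, err := http.Get("http://localhost:1317/cosmos/evidence/v1beta1/evidence/" + hash)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// JSON-encoded evidence, matching the example outputs shown below.
	fmt.Println(string(body))
}
```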
+
+### Evidence
+
+Get evidence by hash
+
+```bash
+/cosmos/evidence/v1beta1/evidence/{evidence_hash}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
+```
+
+Example Output:
+
+```bash
+{
+  "evidence": {
+    "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+    "height": "11",
+    "power": "100",
+    "time": "2021-10-20T16:08:38.194017624Z"
+  }
+}
+```
+
+### All evidence
+
+Get all evidence
+
+```bash
+/cosmos/evidence/v1beta1/evidence
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence"
+```
+
+Example Output:
+
+```bash
+{
+  "evidence": [
+    {
+      "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+      "height": "11",
+      "power": "100",
+      "time": "2021-10-20T16:08:38.194017624Z"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+## gRPC
+
+A user can query the `evidence` module using gRPC endpoints.
+
+### Evidence
+
+Get evidence by hash
+
+```bash
+cosmos.evidence.v1beta1.Query/Evidence
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence
+```
+
+Example Output:
+
+```bash
+{
+  "evidence": {
+    "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+    "height": "11",
+    "power": "100",
+    "time": "2021-10-20T16:08:38.194017624Z"
+  }
+}
+```
+
+### All evidence
+
+Get all evidence
+
+```bash
+cosmos.evidence.v1beta1.Query/AllEvidence
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence
+```
+
+Example Output:
+
+```bash
+{
+  "evidence": [
+    {
+      "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+      "height": "11",
+      "power": "100",
+      "time": "2021-10-20T16:08:38.194017624Z"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
diff --git a/x/feegrant/spec/README.md b/x/feegrant/spec/README.md
index b1bd7febfd61..155485e0c81a 100644
--- a/x/feegrant/spec/README.md
+++ b/x/feegrant/spec/README.md
@@ -30,3 +30,9 @@ This module allows accounts to grant fee allowances and to use fees from their a
 - [MsgGrantAllowance](04_events.md#msggrantallowance)
 - [MsgRevokeAllowance](04_events.md#msgrevokeallowance)
 - [Exec fee allowance](04_events.md#exec-fee-allowance)
+5. **[Client](05_client.md)**
+   - [CLI](05_client.md#cli)
+   - [gRPC](05_client.md#grpc)
diff --git a/x/gov/spec/01_concepts.md b/x/gov/spec/01_concepts.md
index 29e582990b09..36a97c0ddba6 100644
--- a/x/gov/spec/01_concepts.md
+++ b/x/gov/spec/01_concepts.md
@@ -66,8 +66,15 @@ Once the proposal's deposit reaches `MinDeposit`, it enters voting period. If pr
-When a the a proposal finalized, the coins from the deposit are either refunded or burned, according to the final tally of the proposal:
+When a proposal is finalized, the coins from the deposit are either refunded or burned according to the final tally of the proposal:
 
-- If the proposal is approved or if it's rejected but _not_ vetoed, deposits will automatically be refunded to their respective depositor (transferred from the governance `ModuleAccount`).
-- When the proposal is vetoed with a supermajority, deposits be burned from the governance `ModuleAccount`.
+- If the proposal is approved or rejected but _not_ vetoed, each deposit will be automatically refunded to its respective depositor (transferred from the governance `ModuleAccount`).
+- When the proposal is vetoed with a supermajority, deposits will be burned from the governance `ModuleAccount` and the proposal information along with its deposit information will be removed from state.
+- All refunded or burned deposits are removed from the state. Events are issued when burning or refunding a deposit.
+- NOTE: Once a proposal has completed its voting period, its deposits are no longer returned by queries.
 
 ## Vote
 
diff --git a/x/gov/spec/07_client.md b/x/gov/spec/07_client.md
new file mode 100644
index 000000000000..66cab628f2df
--- /dev/null
+++ b/x/gov/spec/07_client.md
@@ -0,0 +1,1060 @@
+
+
+# Client
+
+## CLI
+
+A user can query and interact with the `gov` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `gov` state.
+
+```bash
+simd query gov --help
+```
+
+#### deposit
+
+The `deposit` command allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+simd query gov deposit [proposal-id] [depositor-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposit 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+amount:
+- amount: "100"
+  denom: stake
+depositor: cosmos1..
+proposal_id: "1"
+```
+
+#### deposits
+
+The `deposits` command allows users to query all deposits for a given proposal.
+
+```bash
+simd query gov deposits [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposits 1
+```
+
+Example Output:
+
+```bash
+deposits:
+- amount:
+  - amount: "100"
+    denom: stake
+  depositor: cosmos1..
+  proposal_id: "1"
+pagination:
+  next_key: null
+  total: "0"
+```
+
+#### param
+
+The `param` command allows users to query a given parameter for the `gov` module.
+
+```bash
+simd query gov param [param-type] [flags]
+```
+
+Example:
+
+```bash
+simd query gov param voting
+```
+
+Example Output:
+
+```bash
+voting_period: "172800000000000"
+```
+
+#### params
+
+The `params` command allows users to query all parameters for the `gov` module.
+
+```bash
+simd query gov params [flags]
+```
+
+Example:
+
+```bash
+simd query gov params
+```
+
+Example Output:
+
+```bash
+deposit_params:
+  max_deposit_period: "172800000000000"
+  min_deposit:
+  - amount: "10000000"
+    denom: stake
+tally_params:
+  quorum: "0.334000000000000000"
+  threshold: "0.500000000000000000"
+  veto_threshold: "0.334000000000000000"
+voting_params:
+  voting_period: "172800000000000"
+```
+
+#### proposal
+
+The `proposal` command allows users to query a given proposal.
+
+```bash
+simd query gov proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov proposal 1
+```
+
+Example Output:
+
+```bash
+content:
+  '@type': /cosmos.gov.v1beta1.TextProposal
+  description: testing, testing, 1, 2, 3
+  title: Test Proposal
+deposit_end_time: "2021-09-17T23:36:18.254995423Z"
+final_tally_result:
+  abstain: "0"
+  "no": "0"
+  no_with_veto: "0"
+  "yes": "0"
+proposal_id: "1"
+status: PROPOSAL_STATUS_DEPOSIT_PERIOD
+submit_time: "2021-09-15T23:36:18.254995423Z"
+total_deposit:
+- amount: "100"
+  denom: stake
+voting_end_time: "0001-01-01T00:00:00Z"
+voting_start_time: "0001-01-01T00:00:00Z"
+```
+
+#### proposals
+
+The `proposals` command allows users to query all proposals with optional filters.
+ +```bash +simd query gov proposals [flags] +``` + +Example: + +```bash +simd query gov proposals +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "1" +proposals: +- content: + '@type': /cosmos.gov.v1beta1.TextProposal + description: testing, testing, 1, 2, 3 + title: Test Proposal + deposit_end_time: "2021-09-17T23:36:18.254995423Z" + final_tally_result: + abstain: "0" + "no": "0" + no_with_veto: "0" + "yes": "0" + proposal_id: "1" + status: PROPOSAL_STATUS_DEPOSIT_PERIOD + submit_time: "2021-09-15T23:36:18.254995423Z" + total_deposit: + - amount: "100" + denom: stake + voting_end_time: "0001-01-01T00:00:00Z" + voting_start_time: "0001-01-01T00:00:00Z" +``` + +#### proposer + +The `proposer` command allows users to query the proposer for a given proposal. + +```bash +simd query gov proposer [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov proposer 1 +``` + +Example Output: + +```bash +proposal_id: "1" +proposer: cosmos1r0tllwu5c9dtgwg3wr28lpvf76hg85f5zmh9l2 +``` + +#### tally + +The `tally` command allows users to query the tally of a given proposal vote. + +```bash +simd query gov tally [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov tally 1 +``` + +Example Output: + +```bash +abstain: "0" +"no": "0" +no_with_veto: "0" +"yes": "1" +``` + +#### vote + +The `vote` command allows users to query a vote for a given proposal. + +```bash +simd query gov vote [proposal-id] [voter-addr] [flags] +``` + +Example: + +```bash +simd query gov vote 1 cosmos1.. +``` + +Example Output: + +```bash +option: VOTE_OPTION_YES +options: +- option: VOTE_OPTION_YES + weight: "1.000000000000000000" +proposal_id: "1" +voter: cosmos1.. +``` + +#### votes + +The `votes` command allows users to query all votes for a given proposal. + +```bash +simd query gov votes [proposal-id] [flags] +``` + +Example: + +```bash +simd query gov votes 1 +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +votes: +- option: VOTE_OPTION_YES + options: + - option: VOTE_OPTION_YES + weight: "1.000000000000000000" + proposal_id: "1" + voter: cosmos1r0tllwu5c9dtgwg3wr28lpvf76hg85f5zmh9l2 +``` + +### Transactions + +The `tx` commands allow users to interact with the `gov` module. + +```bash +simd tx gov --help +``` + +#### deposit + +The `deposit` command allows users to deposit tokens for a given proposal. + +```bash +simd tx gov deposit [proposal-id] [deposit] [flags] +``` + +Example: + +```bash +simd tx gov deposit 1 10000000stake --from cosmos1.. +``` + +#### submit-proposal + +The `submit-proposal` command allows users to submit a governance proposal and to optionally include an initial deposit. + +```bash +simd tx gov submit-proposal [command] [flags] +``` + +Example: + +```bash +simd tx gov submit-proposal --title="Test Proposal" --description="testing, testing, 1, 2, 3" --type="Text" --deposit="10000000stake" --from cosmos1.. +``` + +Example (`cancel-software-upgrade`): + +```bash +simd tx gov submit-proposal cancel-software-upgrade --title="Test Proposal" --description="testing, testing, 1, 2, 3" --deposit="10000000stake" --from cosmos1.. +``` + +Example (`community-pool-spend`): + +```bash +simd tx gov submit-proposal community-pool-spend proposal.json --from cosmos1.. 
+```
+
+where `proposal.json` contains:
+
+```json
+{
+  "title": "Test Proposal",
+  "description": "testing, testing, 1, 2, 3",
+  "recipient": "cosmos1..",
+  "amount": "10000000stake",
+  "deposit": "10000000stake"
+}
+```
+
+Example (`param-change`):
+
+```bash
+simd tx gov submit-proposal param-change proposal.json --from cosmos1..
+```
+
+where `proposal.json` contains:
+
+```json
+{
+  "title": "Test Proposal",
+  "description": "testing, testing, 1, 2, 3",
+  "changes": [
+    {
+      "subspace": "staking",
+      "key": "MaxValidators",
+      "value": 100
+    }
+  ],
+  "deposit": "10000000stake"
+}
+```
+
+Example (`software-upgrade`):
+
+```bash
+simd tx gov submit-proposal software-upgrade v2 --title="Test Proposal" --description="testing, testing, 1, 2, 3" --upgrade-height 1000000 --from cosmos1..
+```
+
+#### vote
+
+The `vote` command allows users to submit a vote for a given governance proposal.
+
+```bash
+simd tx gov vote [proposal-id] [option] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov vote 1 yes --from cosmos1..
+```
+
+#### weighted-vote
+
+The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.
+
+```bash
+simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
+```
+
+## gRPC
+
+A user can query the `gov` module using gRPC endpoints.
+
+### Proposal
+
+The `Proposal` endpoint allows users to query a given proposal.
+
+```bash
+cosmos.gov.v1beta1.Query/Proposal
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Proposal
+```
+
+Example Output:
+
+```bash
+{
+  "proposal": {
+    "proposalId": "1",
+    "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"},
+    "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+    "finalTallyResult": {
+      "yes": "0",
+      "abstain": "0",
+      "no": "0",
+      "noWithVeto": "0"
+    },
+    "submitTime": "2021-09-16T19:40:08.712440474Z",
+    "depositEndTime": "2021-09-18T19:40:08.712440474Z",
+    "totalDeposit": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ],
+    "votingStartTime": "2021-09-16T19:40:08.712440474Z",
+    "votingEndTime": "2021-09-18T19:40:08.712440474Z"
+  }
+}
+```
+
+### Proposals
+
+The `Proposals` endpoint allows users to query all proposals with optional filters.
+
+```bash
+cosmos.gov.v1beta1.Query/Proposals
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Proposals
+```
+
+Example Output:
+
+```bash
+{
+  "proposals": [
+    {
+      "proposalId": "1",
+      "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"},
+      "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+      "finalTallyResult": {
+        "yes": "0",
+        "abstain": "0",
+        "no": "0",
+        "noWithVeto": "0"
+      },
+      "submitTime": "2021-09-16T19:40:08.712440474Z",
+      "depositEndTime": "2021-09-18T19:40:08.712440474Z",
+      "totalDeposit": [
+        {
+          "denom": "stake",
+          "amount": "10000000"
+        }
+      ],
+      "votingStartTime": "2021-09-16T19:40:08.712440474Z",
+      "votingEndTime": "2021-09-18T19:40:08.712440474Z"
+    },
+    {
+      "proposalId": "2",
+      "content": {"@type":"/cosmos.upgrade.v1beta1.CancelSoftwareUpgradeProposal","description":"Test Proposal","title":"testing, testing, 1, 2, 3"},
+      "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD",
+      "finalTallyResult": {
+        "yes": "0",
+        "abstain": "0",
+        "no": "0",
+        "noWithVeto": "0"
+      },
+      "submitTime": "2021-09-17T18:26:57.866854713Z",
+      "depositEndTime": "2021-09-19T18:26:57.866854713Z",
+      "votingStartTime": "0001-01-01T00:00:00Z",
+      "votingEndTime": "0001-01-01T00:00:00Z"
+    }
+  ],
+  "pagination": {
+    "total": "2"
+  }
+}
+```
+
+### Vote
+
+The `Vote` endpoint allows users to query a vote for a given proposal.
+
+```bash
+cosmos.gov.v1beta1.Query/Vote
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","voter":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Vote
+```
+
+Example Output:
+
+```bash
+{
+  "vote": {
+    "proposalId": "1",
+    "voter": "cosmos1..",
+    "option": "VOTE_OPTION_YES",
+    "options": [
+      {
+        "option": "VOTE_OPTION_YES",
+        "weight": "1000000000000000000"
+      }
+    ]
+  }
+}
+```
+
+### Votes
+
+The `Votes` endpoint allows users to query all votes for a given proposal.
+
+```bash
+cosmos.gov.v1beta1.Query/Votes
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1"}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Votes
+```
+
+Example Output:
+
+```bash
+{
+  "votes": [
+    {
+      "proposalId": "1",
+      "voter": "cosmos1..",
+      "option": "VOTE_OPTION_YES",
+      "options": [
+        {
+          "option": "VOTE_OPTION_YES",
+          "weight": "1000000000000000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### Params
+
+The `Params` endpoint allows users to query all parameters for the `gov` module.
+
+```bash
+cosmos.gov.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"params_type":"voting"}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+  "votingParams": {
+    "votingPeriod": "172800s"
+  },
+  "depositParams": {
+    "maxDepositPeriod": "0s"
+  },
+  "tallyParams": {
+    "quorum": "MA==",
+    "threshold": "MA==",
+    "vetoThreshold": "MA=="
+  }
+}
+```
+
+### Deposit
+
+The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+    localhost:9090 \
+    cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example Output:
+
+```bash
+{
+  "deposit": {
+    "proposalId": "1",
+    "depositor": "cosmos1..",
+    "amount": [
+      {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    ]
+  }
+}
+```
+
+### Deposits
+
+The `Deposits` endpoint allows users to query all deposits for a given proposal.
+ +```bash +cosmos.gov.v1beta1.Query/Deposits +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/Deposits +``` + +Example Output: + +```bash +{ + "deposits": [ + { + "proposalId": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "total": "1" + } +} +``` + +### TallyResult + +The `TallyResult` endpoint allows users to query the tally of a given proposal. + +```bash +cosmos.gov.v1beta1.Query/TallyResult +``` + +Example: + +```bash +grpcurl -plaintext \ + -d '{"proposal_id":"1"}' \ + localhost:9090 \ + cosmos.gov.v1beta1.Query/TallyResult +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "noWithVeto": "0" + } +} +``` + +## REST + +A user can query the `gov` module using REST endpoints. + +### proposal + +The `proposals` endpoint allows users to query a given proposal. + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1 +``` + +Example Output: + +```bash +{ + "proposal": { + "proposal_id": "1", + "content": { + "@type": "/cosmos.gov.v1beta1.TextProposal", + "title": "Test Proposal", + "description": "testing, testing, 1, 2, 3" + }, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2021-09-16T19:40:08.712440474Z", + "deposit_end_time": "2021-09-18T19:40:08.712440474Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2021-09-16T19:40:08.712440474Z", + "voting_end_time": "2021-09-18T19:40:08.712440474Z" + } +} +``` + +### proposals + +The `proposals` endpoint also allows users to query all proposals with optional filters. + +```bash +/cosmos/gov/v1beta1/proposals +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals +``` + +Example Output: + +```bash +{ + "proposals": [ + { + "proposal_id": "1", + "content": { + "@type": "/cosmos.gov.v1beta1.TextProposal", + "title": "Test Proposal", + "description": "testing, testing, 1, 2, 3" + }, + "status": "PROPOSAL_STATUS_VOTING_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2021-09-16T19:40:08.712440474Z", + "deposit_end_time": "2021-09-18T19:40:08.712440474Z", + "total_deposit": [ + { + "denom": "stake", + "amount": "10000000" + } + ], + "voting_start_time": "2021-09-16T19:40:08.712440474Z", + "voting_end_time": "2021-09-18T19:40:08.712440474Z" + }, + { + "proposal_id": "2", + "content": { + "@type": "/cosmos.upgrade.v1beta1.CancelSoftwareUpgradeProposal", + "title": "Test Proposal", + "description": "testing, testing, 1, 2, 3" + }, + "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD", + "final_tally_result": { + "yes": "0", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + }, + "submit_time": "2021-09-17T18:26:57.866854713Z", + "deposit_end_time": "2021-09-19T18:26:57.866854713Z", + "total_deposit": [ + ], + "voting_start_time": "0001-01-01T00:00:00Z", + "voting_end_time": "0001-01-01T00:00:00Z" + } + ], + "pagination": { + "next_key": null, + "total": "2" + } +} +``` + +### voter vote + +The `votes` endpoint allows users to query a vote for a given proposal. 
+ +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1.. +``` + +Example Output: + +```bash +{ + "vote": { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } +} +``` + +### votes + +The `votes` endpoint allows users to query all votes for a given proposal. + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/votes +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes +``` + +Example Output: + +```bash +{ + "votes": [ + { + "proposal_id": "1", + "voter": "cosmos1..", + "option": "VOTE_OPTION_YES", + "options": [ + { + "option": "VOTE_OPTION_YES", + "weight": "1.000000000000000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +### params + +The `params` endpoint allows users to query all parameters for the `gov` module. + + + +```bash +/cosmos/gov/v1beta1/params/{params_type} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/params/voting +``` + +Example Output: + +```bash +{ + "voting_params": { + "voting_period": "172800s" + }, + "deposit_params": { + "min_deposit": [ + ], + "max_deposit_period": "0s" + }, + "tally_params": { + "quorum": "0.000000000000000000", + "threshold": "0.000000000000000000", + "veto_threshold": "0.000000000000000000" + } +} +``` + +### deposits + +The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor. + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor} +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1.. +``` + +Example Output: + +```bash +{ + "deposit": { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } +} +``` + +### proposal deposits + +The `deposits` endpoint allows users to query all deposits for a given proposal. + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits +``` + +Example Output: + +```bash +{ + "deposits": [ + { + "proposal_id": "1", + "depositor": "cosmos1..", + "amount": [ + { + "denom": "stake", + "amount": "10000000" + } + ] + } + ], + "pagination": { + "next_key": null, + "total": "1" + } +} +``` + +### tally + +The `tally` endpoint allows users to query the tally of a given proposal. + +```bash +/cosmos/gov/v1beta1/proposals/{proposal_id}/tally +``` + +Example: + +```bash +curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally +``` + +Example Output: + +```bash +{ + "tally": { + "yes": "1000000", + "abstain": "0", + "no": "0", + "no_with_veto": "0" + } +} +``` diff --git a/x/group/internal/orm/spec/01_table.md b/x/group/internal/orm/spec/01_table.md new file mode 100644 index 000000000000..7b159b482dc1 --- /dev/null +++ b/x/group/internal/orm/spec/01_table.md @@ -0,0 +1,40 @@ +# Table + +A table can be built given a `codec.ProtoMarshaler` model type, a prefix to access the underlying prefix store used to store table data as well as a `Codec` for marshalling/unmarshalling. 
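+
+For orientation: every row is addressed by a `RowID` key. With the auto-increment flavour described below, the `RowID` is just the 8-byte big-endian encoding of a `uint64` counter, mirroring the SDK's `sdk.Uint64ToBigEndian`. A minimal sketch of that encoding (standard library only):
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+func main() {
+	// Big-endian keeps numeric order and lexicographic byte order aligned,
+	// so iterating the prefix store visits rows in counter order.
+	var seq uint64 = 42 // current value of the auto-increment counter
+	rowID := make([]byte, 8)
+	binary.BigEndian.PutUint64(rowID, seq)
+	fmt.Printf("%x\n", rowID) // prints 000000000000002a
+}
+```
+
+The `table` type itself bundles the model type, the prefix store and the codec: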
+
++++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/table.go#L24-L30
+
+In the prefix store, entities should be stored by a unique identifier called `RowID`, which can be based either on a `uint64` auto-increment counter, a string, or dynamic-size bytes.
+Regular CRUD operations can be performed on a table; these methods take an `sdk.KVStore` as a parameter to get the table prefix store.
+
+The `table` struct does not:
+
+- enforce uniqueness of the `RowID`
+- enforce prefix uniqueness of keys, i.e. not allowing one key to be a prefix
+  of another
+- optimize gas usage conditions
+
+The `table` struct is private, so that only custom tables built on top of it, which do satisfy these requirements, are exposed.
+
+## AutoUInt64Table
+
+`AutoUInt64Table` is a table type with an auto-incrementing `uint64` ID.
+
++++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/auto_uint64.go#L11-L14
+
+It's based on the `Sequence` struct, which is a persistent unique key generator whose counter is encoded as 8-byte big-endian.
+
+## PrimaryKeyTable
+
+`PrimaryKeyTable` provides simpler object-style ORM methods, where objects are persisted and loaded with a reference to their unique primary key.
+
+The model provided for creating a `PrimaryKeyTable` should implement the `PrimaryKeyed` interface:
+
++++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/primary_key.go#L28-L41
+
+The `PrimaryKeyFields()` method returns the list of key parts for a given object.
+Primary key parts can be of type `[]byte`, `string`, or `uint64`.
+All key parts except the last one follow these rules:
+
+- `[]byte` is encoded with a single-byte length prefix
+- strings are null-terminated
+- `uint64` values are encoded as 8-byte big-endian
diff --git a/x/mint/spec/06_client.md b/x/mint/spec/06_client.md
new file mode 100644
index 000000000000..e2c366d932be
--- /dev/null
+++ b/x/mint/spec/06_client.md
@@ -0,0 +1,224 @@
+
+
+# Client
+
+## CLI
+
+A user can query and interact with the `mint` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `mint` state.
+
+```
+simd query mint --help
+```
+
+#### annual-provisions
+
+The `annual-provisions` command allows users to query the current minting annual provisions value.
+
+```
+simd query mint annual-provisions [flags]
+```
+
+Example:
+
+```
+simd query mint annual-provisions
+```
+
+Example Output:
+
+```
+22268504368893.612100895088410693
+```
+
+#### inflation
+
+The `inflation` command allows users to query the current minting inflation value.
+
+```
+simd query mint inflation [flags]
+```
+
+Example:
+
+```
+simd query mint inflation
+```
+
+Example Output:
+
+```
+0.199200302563256955
+```
+
+#### params
+
+The `params` command allows users to query the current minting parameters.
+
+```
+simd query mint params [flags]
+```
+
+Example:
+
+```
+simd query mint params
+```
+
+Example Output:
+
+```
+blocks_per_year: "4360000"
+goal_bonded: "0.670000000000000000"
+inflation_max: "0.200000000000000000"
+inflation_min: "0.070000000000000000"
+inflation_rate_change: "0.130000000000000000"
+mint_denom: stake
+```
+
+## gRPC
+
+A user can query the `mint` module using gRPC endpoints.
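+
+Besides `grpcurl`, the generated Go query client can be used directly. A minimal sketch (assumptions: a local node with gRPC enabled on port 9090, an insecure connection, and error handling trimmed; `minttypes` is `github.com/cosmos/cosmos-sdk/x/mint/types`):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"google.golang.org/grpc"
+
+	minttypes "github.com/cosmos/cosmos-sdk/x/mint/types"
+)
+
+func main() {
+	// Connect to the node's gRPC endpoint.
+	conn, err := grpc.Dial("localhost:9090", grpc.WithInsecure())
+	if err != nil {
+		panic(err)
+	}
+	defer conn.Close()
+
+	// Query the current minting parameters.
+	queryClient := minttypes.NewQueryClient(conn)
+	res, err := queryClient.Params(context.Background(), &minttypes.QueryParamsRequest{})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(res.Params) // mint_denom, inflation bounds, blocks_per_year, ...
+}
+```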
+
+### AnnualProvisions
+
+The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value.
+
+```
+cosmos.mint.v1beta1.Query/AnnualProvisions
+```
+
+Example:
+
+```
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions
+```
+
+Example Output:
+
+```
+{
+  "annualProvisions": "1432452520532626265712995618"
+}
+```
+
+### Inflation
+
+The `Inflation` endpoint allows users to query the current minting inflation value.
+
+```
+cosmos.mint.v1beta1.Query/Inflation
+```
+
+Example:
+
+```
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation
+```
+
+Example Output:
+
+```
+{
+  "inflation": "130197115720711261"
+}
+```
+
+### Params
+
+The `Params` endpoint allows users to query the current minting parameters.
+
+```
+cosmos.mint.v1beta1.Query/Params
+```
+
+Example:
+
+```
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params
+```
+
+Example Output:
+
+```
+{
+  "params": {
+    "mintDenom": "stake",
+    "inflationRateChange": "130000000000000000",
+    "inflationMax": "200000000000000000",
+    "inflationMin": "70000000000000000",
+    "goalBonded": "670000000000000000",
+    "blocksPerYear": "6311520"
+  }
+}
+```
+
+## REST
+
+A user can query the `mint` module using REST endpoints.
+
+### annual-provisions
+
+```
+/cosmos/mint/v1beta1/annual_provisions
+```
+
+Example:
+
+```
+curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions"
+```
+
+Example Output:
+
+```
+{
+  "annualProvisions": "1432452520532626265712995618"
+}
+```
+
+### inflation
+
+```
+/cosmos/mint/v1beta1/inflation
+```
+
+Example:
+
+```
+curl "localhost:1317/cosmos/mint/v1beta1/inflation"
+```
+
+Example Output:
+
+```
+{
+  "inflation": "130197115720711261"
+}
+```
+
+### params
+
+```
+/cosmos/mint/v1beta1/params
+```
+
+Example:
+
+```
+curl "localhost:1317/cosmos/mint/v1beta1/params"
+```
+
+Example Output:
+
+```
+{
+  "params": {
+    "mintDenom": "stake",
+    "inflationRateChange": "130000000000000000",
+    "inflationMax": "200000000000000000",
+    "inflationMin": "70000000000000000",
+    "goalBonded": "670000000000000000",
+    "blocksPerYear": "6311520"
+  }
+}
+```
diff --git a/x/slashing/spec/09_client.md b/x/slashing/spec/09_client.md
new file mode 100644
index 000000000000..fd5b2030fe43
--- /dev/null
+++ b/x/slashing/spec/09_client.md
@@ -0,0 +1,294 @@
+
+
+# Client
+
+## CLI
+
+A user can query and interact with the `slashing` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `slashing` state.
+
+```bash
+simd query slashing --help
+```
+
+#### params
+
+The `params` command allows users to query the current parameters of the slashing module.
+
+```bash
+simd query slashing params [flags]
+```
+
+Example:
+
+```bash
+simd query slashing params
+```
+
+Example Output:
+
+```bash
+downtime_jail_duration: 600s
+min_signed_per_window: "0.500000000000000000"
+signed_blocks_window: "100"
+slash_fraction_double_sign: "0.050000000000000000"
+slash_fraction_downtime: "0.010000000000000000"
+```
+
+#### signing-info
+
+The `signing-info` command allows users to query the signing info of a validator using its consensus public key.
+
+```bash
+simd query slashing signing-info [validator-conspub] [flags]
+```
+
+Example:
+
+```bash
+simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
+```
+
+Example Output:
+
+```bash
+address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+index_offset: "2068"
+jailed_until: "1970-01-01T00:00:00Z"
+missed_blocks_counter: "0"
+start_height: "0"
+tombstoned: false
+```
+
+#### signing-infos
+
+The `signing-infos` command allows users to query the signing infos of all validators.
+
+```bash
+simd query slashing signing-infos [flags]
+```
+
+Example:
+
+```bash
+simd query slashing signing-infos
+```
+
+Example Output:
+
+```bash
+info:
+- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+  index_offset: "2075"
+  jailed_until: "1970-01-01T00:00:00Z"
+  missed_blocks_counter: "0"
+  start_height: "0"
+  tombstoned: false
+pagination:
+  next_key: null
+  total: "0"
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `slashing` module.
+
+```bash
+simd tx slashing --help
+```
+
+#### unjail
+
+The `unjail` command allows users to unjail a validator previously jailed for downtime.
+
+```bash
+simd tx slashing unjail --from mykey [flags]
+```
+
+Example:
+
+```bash
+simd tx slashing unjail --from mykey
+```
+
+## gRPC
+
+A user can query the `slashing` module using gRPC endpoints.
+
+### Params
+
+The `Params` endpoint allows users to query the parameters of the slashing module.
+
+```bash
+cosmos.slashing.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+  "params": {
+    "signedBlocksWindow": "100",
+    "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw",
+    "downtimeJailDuration": "600s",
+    "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=",
+    "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA="
+  }
+}
+```
+
+### SigningInfo
+
+The `SigningInfo` endpoint queries the signing info of a given cons address.
+
+```bash
+cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example Output:
+
+```bash
+{
+  "valSigningInfo": {
+    "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c",
+    "indexOffset": "3493",
+    "jailedUntil": "1970-01-01T00:00:00Z"
+  }
+}
+```
+
+### SigningInfos
+
+The `SigningInfos` endpoint queries the signing info of all validators.
+
+```bash
+cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example Output:
+
+```bash
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "indexOffset": "2467",
+      "jailedUntil": "1970-01-01T00:00:00Z"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+## REST
+
+A user can query the `slashing` module using REST endpoints.
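+
+Any HTTP client works against these endpoints. A minimal Go sketch (assumptions: the node's API server is enabled on `localhost:1317`; error handling trimmed):
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"net/http"
+)
+
+func main() {
+	// Fetch the slashing parameters over the REST (gRPC-gateway) API.
+	resp, err := http.Get("http://localhost:1317/cosmos/slashing/v1beta1/params")
+	if err != nil {
+		panic(err)
+	}
+	defer resp.Body.Close()
+
+	body, err := ioutil.ReadAll(resp.Body)
+	if err != nil {
+		panic(err)
+	}
+
+	// Decode the JSON response into a generic map.
+	var out map[string]interface{}
+	if err := json.Unmarshal(body, &out); err != nil {
+		panic(err)
+	}
+	fmt.Println(out["params"]) // signed_blocks_window, min_signed_per_window, ...
+}
+```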
+
+### Params
+
+```bash
+/cosmos/slashing/v1beta1/params
+```
+
+Example:
+
+```bash
+curl "localhost:1317/cosmos/slashing/v1beta1/params"
+```
+
+Example Output:
+
+```bash
+{
+  "params": {
+    "signed_blocks_window": "100",
+    "min_signed_per_window": "0.500000000000000000",
+    "downtime_jail_duration": "600s",
+    "slash_fraction_double_sign": "0.050000000000000000",
+    "slash_fraction_downtime": "0.010000000000000000"
+  }
+}
+```
+
+### signing_info
+
+```bash
+/cosmos/slashing/v1beta1/signing_infos/{cons_address}
+```
+
+Example:
+
+```bash
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
+```
+
+Example Output:
+
+```bash
+{
+  "val_signing_info": {
+    "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+    "start_height": "0",
+    "index_offset": "4184",
+    "jailed_until": "1970-01-01T00:00:00Z",
+    "tombstoned": false,
+    "missed_blocks_counter": "0"
+  }
+}
+```
+
+### signing_infos
+
+```bash
+/cosmos/slashing/v1beta1/signing_infos
+```
+
+Example:
+
+```bash
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos"
+```
+
+Example Output:
+
+```bash
+{
+  "info": [
+    {
+      "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+      "start_height": "0",
+      "index_offset": "4169",
+      "jailed_until": "1970-01-01T00:00:00Z",
+      "tombstoned": false,
+      "missed_blocks_counter": "0"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
diff --git a/x/slashing/spec/README.md b/x/slashing/spec/README.md
index 226306562333..4fbf184be78d 100644
--- a/x/slashing/spec/README.md
+++ b/x/slashing/spec/README.md
@@ -43,3 +43,10 @@ This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosyste
 7. **[Staking Tombstone](07_tombstone.md)**
    - [Abstract](07_tombstone.md#abstract)
 8. **[Parameters](08_params.md)**
+9. **[Client](09_client.md)**
+   - [CLI](09_client.md#cli)
+   - [gRPC](09_client.md#grpc)
+   - [REST](09_client.md#rest)
diff --git a/x/staking/spec/09_client.md b/x/staking/spec/09_client.md
new file mode 100644
index 000000000000..608705352cfc
--- /dev/null
+++ b/x/staking/spec/09_client.md
@@ -0,0 +1,2088 @@
+
+
+# Client
+
+## CLI
+
+A user can query and interact with the `staking` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `staking` state.
+
+```bash
+simd query staking --help
+```
+
+#### delegation
+
+The `delegation` command allows users to query delegations for an individual delegator on an individual validator.
+
+Usage:
+
+```bash
+simd query staking delegation [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+balance:
+  amount: "10000000000"
+  denom: stake
+delegation:
+  delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+  shares: "10000000000.000000000000000000"
+  validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+#### delegations
+
+The `delegations` command allows users to query delegations for an individual delegator on all validators.
+ +Usage: + +```bash +simd query staking delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash +delegation_responses: +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "10000000000" + denom: stake + delegation: + delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + shares: "10000000000.000000000000000000" + validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp +pagination: + next_key: null + total: "0" +``` + +#### delegations-to + +The `delegations-to` command allows users to query delegations on an individual validator. + +Usage: + +```bash +simd query staking delegations-to [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +- balance: + amount: "504000000" + denom: stake + delegation: + delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp + shares: "504000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- balance: + amount: "78125000000" + denom: uixo + delegation: + delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca + shares: "78125000000.000000000000000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +pagination: + next_key: null + total: "0" +``` + +#### historical-info + +The `historical-info` command allows users to query historical information at given height. 
+ +Usage: + +```bash +simd query staking historical-info [height] [flags] +``` + +Example: + +```bash +simd query staking historical-info 10 +``` + +Example Output: + +```bash +header: + app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo= + chain_id: testnet + consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8= + data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + height: "10" + last_block_id: + hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk= + part_set_header: + hash: vpIvXD4rxD5GM4MXGz0Sad9I7//iVYLzZsEU4BVgWIU= + total: 1 + last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0= + last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= + next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM= + time: "2021-10-01T06:00:49.785790894Z" + validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs= + version: + app: "0" + block: "11" +valset: +- commission: + commission_rates: + max_change_rate: "0.010000000000000000" + max_rate: "0.200000000000000000" + rate: "0.100000000000000000" + update_time: "2021-10-01T05:52:50.380144238Z" + consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8= + delegator_shares: "10000000.000000000000000000" + description: + details: "" + identity: "" + moniker: myvalidator + security_contact: "" + website: "" + jailed: false + min_self_delegation: "1" + operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc + status: BOND_STATUS_BONDED + tokens: "10000000" + unbonding_height: "0" + unbonding_time: "1970-01-01T00:00:00Z" +``` + +#### params + +The `params` command allows users to query values set as staking parameters. + +Usage: + +```bash +simd query staking params [flags] +``` + +Example: + +```bash +simd query staking params +``` + +Example Output: + +```bash +bond_denom: stake +historical_entries: 10000 +max_entries: 7 +max_validators: 50 +unbonding_time: 1814400s +``` + +#### pool + +The `pool` command allows users to query values for amounts stored in the staking pool. + +Usage: + +```bash +simd q staking pool [flags] +``` + +Example: + +```bash +simd q staking pool +``` + +Example Output: + +```bash +bonded_tokens: "10000000" +not_bonded_tokens: "0" +``` + +#### redelegation + +The `redelegation` command allows users to query a redelegation record based on delegator and a source and destination validator address. 
+
+Usage:
+
+```bash
+simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+pagination: null
+redelegation_responses:
+- entries:
+  - balance: "50000000"
+    redelegation_entry:
+      completion_time: "2021-10-24T20:33:21.960084845Z"
+      creation_height: 2.382847e+06
+      initial_balance: "50000000"
+      shares_dst: "50000000.000000000000000000"
+  - balance: "5000000000"
+    redelegation_entry:
+      completion_time: "2021-10-25T21:33:54.446846862Z"
+      creation_height: 2.397271e+06
+      initial_balance: "5000000000"
+      shares_dst: "5000000000.000000000000000000"
+  redelegation:
+    delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+    entries: null
+    validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm
+    validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm
+```
+
+#### redelegations
+
+The `redelegations` command allows users to query all redelegation records for an individual delegator.
+
+Usage:
+
+```bash
+simd query staking redelegations [delegator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking redelegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+```
+
+Example Output:
+
+```bash
+pagination:
+  next_key: null
+  total: "0"
+redelegation_responses:
+- entries:
+  - balance: "50000000"
+    redelegation_entry:
+      completion_time: "2021-10-24T20:33:21.960084845Z"
+      creation_height: 2.382847e+06
+      initial_balance: "50000000"
+      shares_dst: "50000000.000000000000000000"
+  - balance: "5000000000"
+    redelegation_entry:
+      completion_time: "2021-10-25T21:33:54.446846862Z"
+      creation_height: 2.397271e+06
+      initial_balance: "5000000000"
+      shares_dst: "5000000000.000000000000000000"
+  redelegation:
+    delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+    entries: null
+    validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+    validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp
+- entries:
+  - balance: "562770000000"
+    redelegation_entry:
+      completion_time: "2021-10-25T21:42:07.336911677Z"
+      creation_height: 2.39735e+06
+      initial_balance: "562770000000"
+      shares_dst: "562770000000.000000000000000000"
+  redelegation:
+    delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+    entries: null
+    validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+    validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp
+```
+
+#### redelegations-from
+
+The `redelegations-from` command allows users to query delegations that are redelegating _from_ a validator.
+ +Usage: + +```bash +simd query staking redelegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +redelegation_responses: +- entries: + - balance: "50000000" + redelegation_entry: + completion_time: "2021-10-24T20:33:21.960084845Z" + creation_height: 2.382847e+06 + initial_balance: "50000000" + shares_dst: "50000000.000000000000000000" + - balance: "5000000000" + redelegation_entry: + completion_time: "2021-10-25T21:33:54.446846862Z" + creation_height: 2.397271e+06 + initial_balance: "5000000000" + shares_dst: "5000000000.000000000000000000" + redelegation: + delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph + entries: null + validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +- entries: + - balance: "221000000" + redelegation_entry: + completion_time: "2021-10-05T21:05:45.669420544Z" + creation_height: 2.120693e+06 + initial_balance: "221000000" + shares_dst: "221000000.000000000000000000" + redelegation: + delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6 + entries: null + validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y + validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy +``` + +#### unbonding-delegation + +The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator. + +Usage: + +```bash +simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +entries: +- balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" +validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +#### unbonding-delegations + +The `unbonding-delegations` command allows users to query all unbonding-delegations records for one delegator. + +Usage: + +```bash +simd query staking unbonding-delegations [delegator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p + entries: + - balance: "52000000" + completion_time: "2021-11-02T11:35:55.391594709Z" + creation_height: "55078" + initial_balance: "52000000" + validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa + +``` + +#### unbonding-delegations-from + +The `unbonding-delegations-from` command allows users to query delegations that are unbonding _from_ a validator. 
+ +Usage: + +```bash +simd query staking unbonding-delegations-from [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +pagination: + next_key: null + total: "0" +unbonding_responses: +- delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn + entries: + - balance: "150000000" + completion_time: "2021-11-01T21:41:13.098141574Z" + creation_height: "46823" + initial_balance: "150000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +- delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z + entries: + - balance: "24000000" + completion_time: "2021-10-31T02:57:18.192280361Z" + creation_height: "21516" + initial_balance: "24000000" + validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +#### validator + +The `validator` command allows users to query details about an individual validator. + +Usage: + +```bash +simd query staking validator [validator-addr] [flags] +``` + +Example: + +```bash +simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +``` + +Example Output: + +```bash +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. + identity: 51468B615127273A + moniker: Witval + security_contact: "" + website: "" +jailed: false +min_self_delegation: "1" +operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj +status: BOND_STATUS_BONDED +tokens: "32948270000" +unbonding_height: "0" +unbonding_time: "1970-01-01T00:00:00Z" +``` + +#### validators + +The `validators` command allows users to query details about all validators on a network. + +Usage: + +```bash +simd query staking validators [flags] +``` + +Example: + +```bash +simd query staking validators +``` + +Example Output: + +```bash +pagination: + next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/ + total: "0" +validators: +commission: + commission_rates: + max_change_rate: "0.020000000000000000" + max_rate: "0.200000000000000000" + rate: "0.050000000000000000" + update_time: "2021-10-01T19:24:52.663191049Z" +consensus_pubkey: + '@type': /cosmos.crypto.ed25519.PubKey + key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc= +delegator_shares: "32948270000.000000000000000000" +description: + details: Witval is the validator arm from Vitwit. Vitwit is into software consulting + and services business since 2015. We are working closely with Cosmos ecosystem + since 2018. We are also building tools for the ecosystem, Aneka is our explorer + for the cosmos ecosystem. 
+  identity: 51468B615127273A
+  moniker: Witval
+  security_contact: ""
+  website: ""
+  jailed: false
+  min_self_delegation: "1"
+  operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+  status: BOND_STATUS_BONDED
+  tokens: "32948270000"
+  unbonding_height: "0"
+  unbonding_time: "1970-01-01T00:00:00Z"
+- commission:
+    commission_rates:
+      max_change_rate: "0.100000000000000000"
+      max_rate: "0.200000000000000000"
+      rate: "0.050000000000000000"
+    update_time: "2021-10-04T18:02:21.446645619Z"
+  consensus_pubkey:
+    '@type': /cosmos.crypto.ed25519.PubKey
+    key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=
+  delegator_shares: "559343421.000000000000000000"
+  description:
+    details: Noderunners is a professional validator in POS networks. We have a huge
+      node running experience, reliable soft and hardware. Our commissions are always
+      low, our support to delegators is always full. Stake with us and start receiving
+      your Cosmos rewards now!
+    identity: 812E82D12FEA3493
+    moniker: Noderunners
+    security_contact: info@noderunners.biz
+    website: http://noderunners.biz
+  jailed: false
+  min_self_delegation: "1"
+  operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7
+  status: BOND_STATUS_BONDED
+  tokens: "559343421"
+  unbonding_height: "0"
+  unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `staking` module.
+
+```bash
+simd tx staking --help
+```
+
+#### create-validator
+
+The command `create-validator` allows users to create a new validator initialized with a self-delegation.
+
+Usage:
+
+```bash
+simd tx staking create-validator [flags]
+```
+
+Example:
+
+```bash
+simd tx staking create-validator \
+  --amount=1000000stake \
+  --pubkey=$(simd tendermint show-validator) \
+  --moniker="my-moniker" \
+  --website="https://myweb.site" \
+  --details="description of your validator" \
+  --chain-id="name_of_chain_id" \
+  --commission-rate="0.10" \
+  --commission-max-rate="0.20" \
+  --commission-max-change-rate="0.01" \
+  --min-self-delegation="1" \
+  --gas="auto" \
+  --gas-adjustment="1.2" \
+  --gas-prices="0.025stake" \
+  --from=mykey
+```
+
+#### delegate
+
+The command `delegate` allows users to delegate liquid tokens to a validator.
+
+Usage:
+
+```bash
+simd tx staking delegate [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey
+```
+
+#### edit-validator
+
+The command `edit-validator` allows users to edit an existing validator account.
+
+Usage:
+
+```bash
+simd tx staking edit-validator [flags]
+```
+
+Example:
+
+```bash
+simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey
+```
+
+#### redelegate
+
+The command `redelegate` allows users to redelegate illiquid tokens from one validator to another.
+
+Usage:
+
+```bash
+simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey
+```
+
+#### unbond
+
+The command `unbond` allows users to unbond shares from a validator.
+
+Usage:
+
+```bash
+simd tx staking unbond [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey
+```
+
+## gRPC
+
+A user can query the `staking` module using gRPC endpoints.
+
+### Validators
+
+The `Validators` endpoint queries all validators that match the given status.
+
+```bash
+cosmos.staking.v1beta1.Query/Validators
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators
+```
+
+Example Output:
+
+```bash
+{
+  "validators": [
+    {
+      "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="},
+      "status": "BOND_STATUS_BONDED",
+      "tokens": "10000000",
+      "delegatorShares": "10000000000000000000000000",
+      "description": {
+        "moniker": "myvalidator"
+      },
+      "unbondingTime": "1970-01-01T00:00:00Z",
+      "commission": {
+        "commissionRates": {
+          "rate": "100000000000000000",
+          "maxRate": "200000000000000000",
+          "maxChangeRate": "10000000000000000"
+        },
+        "updateTime": "2021-10-01T05:52:50.380144238Z"
+      },
+      "minSelfDelegation": "1"
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### Validator
+
+The `Validator` endpoint queries validator information for a given validator address.
+
+```bash
+cosmos.staking.v1beta1.Query/Validator
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Validator
+```
+
+Example Output:
+
+```bash
+{
+  "validator": {
+    "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="},
+    "status": "BOND_STATUS_BONDED",
+    "tokens": "10000000",
+    "delegatorShares": "10000000000000000000000000",
+    "description": {
+      "moniker": "myvalidator"
+    },
+    "unbondingTime": "1970-01-01T00:00:00Z",
+    "commission": {
+      "commissionRates": {
+        "rate": "100000000000000000",
+        "maxRate": "200000000000000000",
+        "maxChangeRate": "10000000000000000"
+      },
+      "updateTime": "2021-10-01T05:52:50.380144238Z"
+    },
+    "minSelfDelegation": "1"
+  }
+}
+```
+
+### ValidatorDelegations
+
+The `ValidatorDelegations` endpoint queries the delegations of a given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "delegationResponses": [
+    {
+      "delegation": {
+        "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t",
+        "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+        "shares": "10000000000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "10000000"
+      }
+    }
+  ],
+  "pagination": {
+    "total": "1"
+  }
+}
+```
+
+### ValidatorUnbondingDelegations
+
+The `ValidatorUnbondingDelegations` endpoint queries the unbonding delegations of a given validator.
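+
+The same query can be issued from Go with the generated query client. A minimal sketch (assumptions: a local node with gRPC on port 9090, an insecure connection, and error handling trimmed; `stakingtypes` is `github.com/cosmos/cosmos-sdk/x/staking/types`):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"google.golang.org/grpc"
+
+	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
+)
+
+func main() {
+	conn, err := grpc.Dial("localhost:9090", grpc.WithInsecure())
+	if err != nil {
+		panic(err)
+	}
+	defer conn.Close()
+
+	// Ask for all unbonding delegations of one validator.
+	queryClient := stakingtypes.NewQueryClient(conn)
+	res, err := queryClient.ValidatorUnbondingDelegations(
+		context.Background(),
+		&stakingtypes.QueryValidatorUnbondingDelegationsRequest{
+			ValidatorAddr: "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+		},
+	)
+	if err != nil {
+		panic(err)
+	}
+	for _, u := range res.UnbondingResponses {
+		fmt.Println(u.DelegatorAddress, u.Entries)
+	}
+}
+```
+
+The raw endpoint and a `grpcurl` example follow: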
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "25325",
+          "completion_time": "2021-10-31T09:24:36.797320636Z",
+          "initial_balance": "20000000",
+          "balance": "20000000"
+        }
+      ]
+    },
+    {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "entries": [
+        {
+          "creation_height": "13100",
+          "completion_time": "2021-10-30T12:53:02.272266791Z",
+          "initial_balance": "1000000",
+          "balance": "1000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "8"
+  }
+}
+```
+
+### Delegation
+
+The `Delegation` endpoint queries delegation information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_response": {
+    "delegation": {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+      "shares": "25083119936.000000000000000000"
+    },
+    "balance": {
+      "denom": "stake",
+      "amount": "25083119936"
+    }
+  }
+}
+```
+
+### UnbondingDelegation
+
+The `UnbondingDelegation` endpoint queries unbonding information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example Output:
+
+```bash
+{
+  "unbond": {
+    "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+    "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+    "entries": [
+      {
+        "creation_height": "136984",
+        "completion_time": "2021-11-08T05:38:47.505593891Z",
+        "initial_balance": "400000000",
+        "balance": "400000000"
+      },
+      {
+        "creation_height": "137005",
+        "completion_time": "2021-11-08T05:40:53.526196312Z",
+        "initial_balance": "385000000",
+        "balance": "385000000"
+      }
+    ]
+  }
+}
+```
+
+### DelegatorDelegations
+
+The `DelegatorDelegations` endpoint queries all delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_responses": [
+    {"delegation":{"delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77","validator_address":"cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8","shares":"25083339023.000000000000000000"},"balance":{"denom":"stake","amount":"25083339023"}}
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+      "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze",
+      "entries": [
+        {
+          "creation_height": "136984",
+          "completion_time": "2021-11-08T05:38:47.505593891Z",
+          "initial_balance": "400000000",
+          "balance": "400000000"
+        },
+        {
+          "creation_height": "137005",
+          "completion_time": "2021-11-08T05:40:53.526196312Z",
+          "initial_balance": "385000000",
+          "balance": "385000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+### Redelegations
+
+The `Redelegations` endpoint queries redelegations of a given address.
+
+```bash
+cosmos.staking.v1beta1.Query/Redelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Redelegations
+```
+
+Example Output:
+
+```bash
+{
+  "redelegation_responses": [
+    {
+      "redelegation": {
+        "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf",
+        "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g",
+        "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse",
+        "entries": null
+      },
+      "entries": [
+        {
+          "redelegation_entry": {
+            "creation_height": 135932,
+            "completion_time": "2021-11-08T03:52:55.299147901Z",
+            "initial_balance": "2900000",
+            "shares_dst": "2900000.000000000000000000"
+          },
+          "balance": "2900000"
+        }
+      ]
+    }
+  ],
+  "pagination": null
+}
+```
+
+### DelegatorValidators
+
+The `DelegatorValidators` endpoint queries information for all validators of a given delegator.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorValidators
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators
+```
+
+Example Output:
+
+```bash
+{
+  "validators": [
+    {
+      "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8",
+      "consensus_pubkey": {
+        "@type": "/cosmos.crypto.ed25519.PubKey",
+        "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw="
+      },
+      "jailed": false,
+      "status": "BOND_STATUS_BONDED",
+      "tokens": "347260647559",
+      "delegator_shares": "347260647559.000000000000000000",
+      "description": {
+        "moniker": "BouBouNode",
+        "identity": "",
+        "website": "https://boubounode.com",
+        "security_contact": "",
+        "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
+      },
+      "unbonding_height": "0",
+      "unbonding_time": "1970-01-01T00:00:00Z",
+      "commission": {
+        "commission_rates": {
+          "rate": "0.061000000000000000",
+          "max_rate": "0.300000000000000000",
+          "max_change_rate": "0.150000000000000000"
+        },
+        "update_time": "2021-10-01T15:00:00Z"
+      },
+      "min_self_delegation": "1"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+### DelegatorValidator
+
+The `DelegatorValidator` endpoint queries validator information for a given delegator-validator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorValidator
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator
+```
+
+Example Output:
+
+```bash
+{
+  "validator": {
+    "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8",
+    "consensus_pubkey": {
+      "@type": "/cosmos.crypto.ed25519.PubKey",
+      "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw="
+    },
+    "jailed": false,
+    "status": "BOND_STATUS_BONDED",
+    "tokens": "347262754841",
+    "delegator_shares": "347262754841.000000000000000000",
+    "description": {
+      "moniker": "BouBouNode",
+      "identity": "",
+      "website": "https://boubounode.com",
+      "security_contact": "",
+      "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
+    },
+    "unbonding_height": "0",
+    "unbonding_time": "1970-01-01T00:00:00Z",
+    "commission": {
+      "commission_rates": {
+        "rate": "0.061000000000000000",
+        "max_rate": "0.300000000000000000",
+        "max_change_rate": "0.150000000000000000"
+      },
+      "update_time": "2021-10-01T15:00:00Z"
+    },
+    "min_self_delegation": "1"
+  }
+}
+```
+
+### HistoricalInfo
+
+The `HistoricalInfo` endpoint queries the historical information for a given height.
+
+```bash
+cosmos.staking.v1beta1.Query/HistoricalInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo
+```
+
+Example Output:
+
+```bash
+{
+  "hist": {
+    "header": {
+      "version": {
+        "block": "11",
+        "app": "0"
+      },
+      "chain_id": "simd-1",
+      "height": "140142",
+      "time": "2021-10-11T10:56:29.720079569Z",
+      "last_block_id": {
+        "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=",
+        "part_set_header": {
+          "total": 1,
+          "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc="
+        }
+      },
+      "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=",
+      "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=",
+      "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=",
+      "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=",
+      "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=",
+      "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=",
+      "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
+      "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
+      "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY="
+    },
+    "valset": [
+      {
+        "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw",
+        "consensus_pubkey": {
+          "@type": "/cosmos.crypto.ed25519.PubKey",
+          "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q="
+        },
+        "jailed": false,
+        "status": "BOND_STATUS_BONDED",
+        "tokens": "1426045203613",
+        "delegator_shares": "1426045203613.000000000000000000",
+        "description": {
+          "moniker": "SG-1",
+          "identity": "48608633F99D1B60",
+          "website": "https://sg-1.online",
+          "security_contact": "",
+          "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection."
+        },
+        "unbonding_height": "0",
+        "unbonding_time": "1970-01-01T00:00:00Z",
+        "commission": {
+          "commission_rates": {
+            "rate": "0.037500000000000000",
+            "max_rate": "0.200000000000000000",
+            "max_change_rate": "0.030000000000000000"
+          },
+          "update_time": "2021-10-01T15:00:00Z"
+        },
+        "min_self_delegation": "1"
+      }
+    ]
+  }
+}
+```
+
+### Pool
+
+The `Pool` endpoint queries the pool information.
+
+```bash
+cosmos.staking.v1beta1.Query/Pool
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Pool
+```
+
+Example Output:
+
+```bash
+{
+  "pool": {
+    "not_bonded_tokens": "369054400189",
+    "bonded_tokens": "15657192425623"
+  }
+}
+```
+
+### Params
+
+The `Params` endpoint queries the staking parameters.
+
+```bash
+cosmos.staking.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+  "params": {
+    "unbondingTime": "1814400s",
+    "maxValidators": 100,
+    "maxEntries": 7,
+    "historicalEntries": 10000,
+    "bondDenom": "stake"
+  }
+}
+```
+
+## REST
+
+A user can query the `staking` module using REST endpoints.
+
+### DelegatorDelegations
+
+The `DelegatorDelegations` REST endpoint queries all delegations of a given delegator address.
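+
+When served over the REST gateway, the same `pagination` options are exposed as query parameters; a minimal sketch (the `pagination.limit` value is illustrative):
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5?pagination.limit=1" -H "accept: application/json"
+```
+
+The endpoint path is: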
+
+```bash
+/cosmos/staking/v1beta1/delegations/{delegatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_responses": [
+    {
+      "delegation": {
+        "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5",
+        "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8",
+        "shares": "256250000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "256250000"
+      }
+    },
+    {
+      "delegation": {
+        "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5",
+        "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv",
+        "shares": "255150000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "255150000"
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
+
+### Redelegations
+
+The `Redelegations` REST endpoint queries redelegations of a given address.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "redelegation_responses": [
+    {
+      "redelegation": {
+        "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e",
+        "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf",
+        "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4",
+        "entries": null
+      },
+      "entries": [
+        {
+          "redelegation_entry": {
+            "creation_height": 151523,
+            "completion_time": "2021-11-09T06:03:25.640682116Z",
+            "initial_balance": "200000000",
+            "shares_dst": "200000000.000000000000000000"
+          },
+          "balance": "200000000"
+        }
+      ]
+    }
+  ],
+  "pagination": null
+}
+```
+
+### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll",
+      "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq",
+      "entries": [
+        {
+          "creation_height": "2442278",
+          "completion_time": "2021-10-12T10:59:03.797335857Z",
+          "initial_balance": "50000000000",
+          "balance": "50000000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+### DelegatorValidators
+
+The `DelegatorValidators` REST endpoint queries information for all validators of a given delegator address.
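+
+Since the responses are JSON, they combine naturally with tools like `jq`; a sketch (assumes `jq` is installed) that extracts only the monikers of the returned validators:
+
+```bash
+curl -s "http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" | jq -r '.validators[].description.moniker'
+```
+
+The endpoint path is: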
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "validators": [
+    {
+      "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64",
+      "consensus_pubkey": {
+        "@type": "/cosmos.crypto.ed25519.PubKey",
+        "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ="
+      },
+      "jailed": false,
+      "status": "BOND_STATUS_BONDED",
+      "tokens": "21592843799",
+      "delegator_shares": "21592843799.000000000000000000",
+      "description": {
+        "moniker": "jabbey",
+        "identity": "",
+        "website": "https://twitter.com/JoeAbbey",
+        "security_contact": "",
+        "details": "just another dad in the cosmos"
+      },
+      "unbonding_height": "0",
+      "unbonding_time": "1970-01-01T00:00:00Z",
+      "commission": {
+        "commission_rates": {
+          "rate": "0.100000000000000000",
+          "max_rate": "0.200000000000000000",
+          "max_change_rate": "0.100000000000000000"
+        },
+        "update_time": "2021-10-09T19:03:54.984821705Z"
+      },
+      "min_self_delegation": "1"
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "1"
+  }
+}
+```
+
+### DelegatorValidator
+
+The `DelegatorValidator` REST endpoint queries validator information for a given delegator-validator pair.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "validator": {
+    "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64",
+    "consensus_pubkey": {
+      "@type": "/cosmos.crypto.ed25519.PubKey",
+      "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ="
+    },
+    "jailed": false,
+    "status": "BOND_STATUS_BONDED",
+    "tokens": "21592843799",
+    "delegator_shares": "21592843799.000000000000000000",
+    "description": {
+      "moniker": "jabbey",
+      "identity": "",
+      "website": "https://twitter.com/JoeAbbey",
+      "security_contact": "",
+      "details": "just another dad in the cosmos"
+    },
+    "unbonding_height": "0",
+    "unbonding_time": "1970-01-01T00:00:00Z",
+    "commission": {
+      "commission_rates": {
+        "rate": "0.100000000000000000",
+        "max_rate": "0.200000000000000000",
+        "max_change_rate": "0.100000000000000000"
+      },
+      "update_time": "2021-10-09T19:03:54.984821705Z"
+    },
+    "min_self_delegation": "1"
+  }
+}
+```
+
+### HistoricalInfo
+
+The `HistoricalInfo` REST endpoint queries the historical information for a given height.
+ +```bash +/cosmos/staking/v1beta1/historical_info/{height} +``` + +Example: + +```bash +curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json" +``` + +Example Output: + +```bash +{ + "hist": { + "header": { + "version": { + "block": "11", + "app": "0" + }, + "chain_id": "cosmos-1", + "height": "153332", + "time": "2021-10-12T09:05:35.062230221Z", + "last_block_id": { + "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=", + "part_set_header": { + "total": 1, + "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg=" + } + }, + "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=", + "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=", + "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=", + "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=", + "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=", + "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=", + "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=", + "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q=" + }, + "valset": [ + { + "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1416521659632", + "delegator_shares": "1416521659632.000000000000000000", + "description": { + "moniker": "SG-1", + "identity": "48608633F99D1B60", + "website": "https://sg-1.online", + "security_contact": "", + "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection." + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.037500000000000000", + "max_rate": "0.200000000000000000", + "max_change_rate": "0.030000000000000000" + }, + "update_time": "2021-10-01T15:00:00Z" + }, + "min_self_delegation": "1" + }, + { + "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa", + "consensus_pubkey": { + "@type": "/cosmos.crypto.ed25519.PubKey", + "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y=" + }, + "jailed": false, + "status": "BOND_STATUS_BONDED", + "tokens": "1348298958808", + "delegator_shares": "1348298958808.000000000000000000", + "description": { + "moniker": "Cosmostation", + "identity": "AE4C403A6E7AA1AC", + "website": "https://www.cosmostation.io", + "security_contact": "admin@stamper.network", + "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards" + }, + "unbonding_height": "0", + "unbonding_time": "1970-01-01T00:00:00Z", + "commission": { + "commission_rates": { + "rate": "0.050000000000000000", + "max_rate": "1.000000000000000000", + "max_change_rate": "0.200000000000000000" + }, + "update_time": "2021-10-01T15:06:38.821314287Z" + }, + "min_self_delegation": "1" + } + ] + } +} +``` + +### Parameters + +The `Parameters` REST endpoint queries the staking parameters. 
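+
+The same parameters are also available through the staking CLI; a sketch (the CLI prints YAML rather than JSON by default):
+
+```bash
+simd query staking params
+```
+
+The endpoint path is: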
+
+```bash
+/cosmos/staking/v1beta1/params
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "params": {
+    "unbonding_time": "2419200s",
+    "max_validators": 100,
+    "max_entries": 7,
+    "historical_entries": 10000,
+    "bond_denom": "stake"
+  }
+}
+```
+
+### Pool
+
+The `Pool` REST endpoint queries the pool information.
+
+```bash
+/cosmos/staking/v1beta1/pool
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "pool": {
+    "not_bonded_tokens": "432805737458",
+    "bonded_tokens": "15783637712645"
+  }
+}
+```
+
+### Validators
+
+The `Validators` REST endpoint queries all validators that match the given status.
+
+```bash
+/cosmos/staking/v1beta1/validators
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "validators": [
+    {
+      "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw",
+      "consensus_pubkey": {
+        "@type": "/cosmos.crypto.ed25519.PubKey",
+        "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE="
+      },
+      "jailed": false,
+      "status": "BOND_STATUS_BONDED",
+      "tokens": "383301887799",
+      "delegator_shares": "383301887799.000000000000000000",
+      "description": {
+        "moniker": "SmartNodes",
+        "identity": "D372724899D1EDC8",
+        "website": "https://smartnodes.co",
+        "security_contact": "",
+        "details": "Earn Rewards with Crypto Staking & Node Deployment"
+      },
+      "unbonding_height": "0",
+      "unbonding_time": "1970-01-01T00:00:00Z",
+      "commission": {
+        "commission_rates": {
+          "rate": "0.050000000000000000",
+          "max_rate": "0.200000000000000000",
+          "max_change_rate": "0.100000000000000000"
+        },
+        "update_time": "2021-10-01T15:51:31.596618510Z"
+      },
+      "min_self_delegation": "1"
+    },
+    {
+      "operator_address": "cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7",
+      "consensus_pubkey": {
+        "@type": "/cosmos.crypto.ed25519.PubKey",
+        "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA="
+      },
+      "jailed": false,
+      "status": "BOND_STATUS_UNBONDING",
+      "tokens": "1017819654",
+      "delegator_shares": "1017819654.000000000000000000",
+      "description": {
+        "moniker": "Noderunners",
+        "identity": "812E82D12FEA3493",
+        "website": "http://noderunners.biz",
+        "security_contact": "info@noderunners.biz",
+        "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!"
+      },
+      "unbonding_height": "147302",
+      "unbonding_time": "2021-11-08T22:58:53.718662452Z",
+      "commission": {
+        "commission_rates": {
+          "rate": "0.050000000000000000",
+          "max_rate": "0.200000000000000000",
+          "max_change_rate": "0.100000000000000000"
+        },
+        "update_time": "2021-10-04T18:02:21.446645619Z"
+      },
+      "min_self_delegation": "1"
+    }
+  ],
+  "pagination": {
+    "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK",
+    "total": "2"
+  }
+}
+```
+
+### Validator
+
+The `Validator` REST endpoint queries validator information for a given validator address.
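+
+The equivalent lookup is also available through the staking CLI; a sketch, using the validator address from the example below:
+
+```bash
+simd query staking validator cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q
+```
+
+The endpoint path is: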
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "validator": {
+    "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+    "consensus_pubkey": {
+      "@type": "/cosmos.crypto.ed25519.PubKey",
+      "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc="
+    },
+    "jailed": false,
+    "status": "BOND_STATUS_BONDED",
+    "tokens": "33027900000",
+    "delegator_shares": "33027900000.000000000000000000",
+    "description": {
+      "moniker": "Witval",
+      "identity": "51468B615127273A",
+      "website": "",
+      "security_contact": "",
+      "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem."
+    },
+    "unbonding_height": "0",
+    "unbonding_time": "1970-01-01T00:00:00Z",
+    "commission": {
+      "commission_rates": {
+        "rate": "0.050000000000000000",
+        "max_rate": "0.200000000000000000",
+        "max_change_rate": "0.020000000000000000"
+      },
+      "update_time": "2021-10-01T19:24:52.663191049Z"
+    },
+    "min_self_delegation": "1"
+  }
+}
+```
+
+### ValidatorDelegations
+
+The `ValidatorDelegations` REST endpoint queries delegation information for a given validator.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_responses": [
+    {
+      "delegation": {
+        "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3",
+        "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+        "shares": "31000000000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "31000000000"
+      }
+    },
+    {
+      "delegation": {
+        "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee",
+        "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+        "shares": "628470000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "628470000"
+      }
+    },
+    {
+      "delegation": {
+        "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw",
+        "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+        "shares": "838120000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "838120000"
+      }
+    },
+    {
+      "delegation": {
+        "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8",
+        "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+        "shares": "500000000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "500000000"
+      }
+    },
+    {
+      "delegation": {
+        "delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e",
+        "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+        "shares": "61310000.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "61310000"
+      }
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "5"
+  }
+}
+```
+
+### Delegation
+
+The `Delegation` REST endpoint queries delegation information for a given validator-delegator pair.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "delegation_response": {
+    "delegation": {
+      "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8",
+      "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+      "shares": "500000000.000000000000000000"
+    },
+    "balance": {
+      "denom": "stake",
+      "amount": "500000000"
+    }
+  }
+}
+```
+
+### UnbondingDelegation
+
+The `UnbondingDelegation` REST endpoint queries unbonding information for a given validator-delegator pair.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "unbond": {
+    "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm",
+    "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu",
+    "entries": [
+      {
+        "creation_height": "153687",
+        "completion_time": "2021-11-09T09:41:18.352401903Z",
+        "initial_balance": "525111",
+        "balance": "525111"
+      }
+    ]
+  }
+}
+```
+
+### ValidatorUnbondingDelegations
+
+The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "unbonding_responses": [
+    {
+      "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy",
+      "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu",
+      "entries": [
+        {
+          "creation_height": "90998",
+          "completion_time": "2021-11-05T00:14:37.005841058Z",
+          "initial_balance": "24000000",
+          "balance": "24000000"
+        }
+      ]
+    },
+    {
+      "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2",
+      "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu",
+      "entries": [
+        {
+          "creation_height": "47478",
+          "completion_time": "2021-11-01T22:47:26.714116854Z",
+          "initial_balance": "8000000",
+          "balance": "8000000"
+        }
+      ]
+    }
+  ],
+  "pagination": {
+    "next_key": null,
+    "total": "2"
+  }
+}
+```
diff --git a/x/upgrade/spec/04_client.md b/x/upgrade/spec/04_client.md
new file mode 100644
index 000000000000..da55709ee712
--- /dev/null
+++ b/x/upgrade/spec/04_client.md
@@ -0,0 +1,459 @@
+
+# Client
+
+## CLI
+
+A user can query and interact with the `upgrade` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `upgrade` state.
+
+```bash
+simd query upgrade --help
+```
+
+#### applied
+
+The `applied` command allows users to query the block header for the height at which a completed upgrade was applied.
+
+```bash
+simd query upgrade applied [upgrade-name] [flags]
+```
+
+If `upgrade-name` was previously executed on the chain, this command returns the header for the block at which it was applied.
+
+This helps a client determine which binary was valid over a given range of blocks, and provides more context for understanding past migrations.
+
+Example:
+
+```bash
+simd query upgrade applied "test-upgrade"
+```
+
+Example Output:
+
+```bash
+"block_id": {
+    "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5",
+    "parts": {
+      "total": 1,
+      "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E"
+    }
+  },
+  "block_size": "7213",
+  "header": {
+    "version": {
+      "block": "11"
+    },
+    "chain_id": "testnet-2",
+    "height": "455200",
+    "time": "2021-04-10T04:37:57.085493838Z",
+    "last_block_id": {
+      "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783",
+      "parts": {
+        "total": 1,
+        "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D"
+      }
+    },
+    "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140",
+    "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
+    "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582",
+    "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582",
+    "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F",
+    "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021",
+    "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
+    "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
+    "proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76"
+  },
+  "num_txs": "0"
+}
+```
+
+#### module versions
+
+The `module_versions` command gets a list of module names and their respective consensus versions.
+
+Following the command with a specific module name will return only
+that module's information.
+
+```bash
+simd query upgrade module_versions [optional module_name] [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: auth
+  version: "2"
+- name: authz
+  version: "1"
+- name: bank
+  version: "2"
+- name: capability
+  version: "1"
+- name: crisis
+  version: "1"
+- name: distribution
+  version: "2"
+- name: evidence
+  version: "1"
+- name: feegrant
+  version: "1"
+- name: genutil
+  version: "1"
+- name: gov
+  version: "2"
+- name: ibc
+  version: "2"
+- name: mint
+  version: "1"
+- name: params
+  version: "1"
+- name: slashing
+  version: "2"
+- name: staking
+  version: "2"
+- name: transfer
+  version: "1"
+- name: upgrade
+  version: "1"
+- name: vesting
+  version: "1"
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions ibc
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: ibc
+  version: "2"
+```
+
+#### plan
+
+The `plan` command gets the currently scheduled upgrade plan, if one exists.
+
+```bash
+simd query upgrade plan [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade plan
+```
+
+Example Output:
+
+```bash
+height: "130"
+info: ""
+name: test-upgrade
+time: "0001-01-01T00:00:00Z"
+upgraded_client_state: null
+```
+
+## REST
+
+A user can query the `upgrade` module using REST endpoints.
+
+### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
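+
+Once the applied height is known, the block itself can be inspected with the CLI; a sketch (the height matches the example output below):
+
+```bash
+simd query block 30
+```
+
+The endpoint path is: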
+
+```bash
+/cosmos/upgrade/v1beta1/applied_plan/{name}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+/cosmos/upgrade/v1beta1/current_plan
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+/cosmos/upgrade/v1beta1/module_versions
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "capability",
+      "version": "1"
+    },
+    {
+      "name": "crisis",
+      "version": "1"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```
+
+## gRPC
+
+A user can query the `upgrade` module using gRPC endpoints.
+
+### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+    -d '{"name":"v2.0-upgrade"}' \
+    localhost:9090 \
+    cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example Output:
+
+```bash
+{
+  "height": "30"
+}
+```
+
+### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example Output:
+
+```bash
+{
+  "plan": "v2.1-upgrade"
+}
+```
+
+### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
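+
+The request message also accepts an optional `module_name` field to restrict the response to a single module; a sketch (the `bank` module name is illustrative):
+
+```bash
+grpcurl -plaintext \
+    -d '{"module_name":"bank"}' \
+    localhost:9090 \
+    cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+The fully-qualified method name is: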
+
+```bash
+cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example Output:
+
+```bash
+{
+  "module_versions": [
+    {
+      "name": "auth",
+      "version": "2"
+    },
+    {
+      "name": "authz",
+      "version": "1"
+    },
+    {
+      "name": "bank",
+      "version": "2"
+    },
+    {
+      "name": "capability",
+      "version": "1"
+    },
+    {
+      "name": "crisis",
+      "version": "1"
+    },
+    {
+      "name": "distribution",
+      "version": "2"
+    },
+    {
+      "name": "evidence",
+      "version": "1"
+    },
+    {
+      "name": "feegrant",
+      "version": "1"
+    },
+    {
+      "name": "genutil",
+      "version": "1"
+    },
+    {
+      "name": "gov",
+      "version": "2"
+    },
+    {
+      "name": "ibc",
+      "version": "2"
+    },
+    {
+      "name": "mint",
+      "version": "1"
+    },
+    {
+      "name": "params",
+      "version": "1"
+    },
+    {
+      "name": "slashing",
+      "version": "2"
+    },
+    {
+      "name": "staking",
+      "version": "2"
+    },
+    {
+      "name": "transfer",
+      "version": "1"
+    },
+    {
+      "name": "upgrade",
+      "version": "1"
+    },
+    {
+      "name": "vesting",
+      "version": "1"
+    }
+  ]
+}
+```

From abe3ddf5e34740fe2aa91ecc452230850271ff9a Mon Sep 17 00:00:00 2001
From: Robert Zaremba
Date: Thu, 11 Nov 2021 21:06:11 +0100
Subject: [PATCH 2/3] fix conflicts

---
 CONTRIBUTING.md                               | 269 ++++--------------
 STABLE_RELEASES.md                            | 215 --------------
 contrib/rosetta/README.md                     |   4 -
 cosmovisor/README.md                          |  76 -----
 crypto/keyring/keyring.go                     |   4 -
 docs/DOCS_README.md                           |  11 -
 docs/architecture/adr-038-state-listening.md  |  43 ---
 ...r-040-storage-and-smt-state-commitments.md |  90 ------
 docs/migrations/rest.md                       |   4 -
 docs/run-node/rosetta.md                      |   4 -
 docs/run-node/run-node.md                     |   3 -
 go.mod                                        |  71 -----
 x/auth/ante/sigverify.go                      |   9 -
 x/auth/spec/01_concepts.md                    |   3 -
 x/auth/spec/05_vesting.md                     |   3 -
 x/bank/spec/README.md                         |   6 -
 x/distribution/spec/README.md                 |   8 +-
 x/feegrant/spec/README.md                     |   6 -
 x/gov/spec/01_concepts.md                     |   7 -
 x/slashing/spec/README.md                     |   3 -
 20 files changed, 52 insertions(+), 787 deletions(-)
 delete mode 100644 STABLE_RELEASES.md

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index d141be660795..168f1419c37b 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,38 +1,27 @@
 # Contributing

 - [Contributing](#contributing)
+  - [Dev Calls](#dev-calls)
   - [Architecture Decision Records (ADR)](#architecture-decision-records-adr)
-  - [Pull Requests](#pull-requests)
+  - [Development Procedure](#development-procedure)
+    - [Testing](#testing)
+    - [Pull Requests](#pull-requests)
     - [Pull Request Templates](#pull-request-templates)
     - [Requesting Reviews](#requesting-reviews)
-    - [Reviewing Pull Requests](#reviewing-pull-requests)
     - [Updating Documentation](#updating-documentation)
-  - [Forking](#forking)
   - [Dependencies](#dependencies)
   - [Protobuf](#protobuf)
-  - [Testing](#testing)
   - [Branching Model and Release](#branching-model-and-release)
     - [PR Targeting](#pr-targeting)
-    - [Development Procedure](#development-procedure)
-    - [Pull Merge Procedure](#pull-merge-procedure)
-    - [Release Procedure](#release-procedure)
-    - [Point Release Procedure](#point-release-procedure)
   - [Code Owner Membership](#code-owner-membership)
+  - [Concept & Feature Approval Process](#concept--feature-approval-process)

-Thank you for considering making contributions to Cosmos-SDK and related
-repositories!
+Thank you for considering making contributions to the Cosmos SDK and related repositories!

 Contributing to this repo can mean many things such as participating in
 discussion or proposing code changes.
To ensure a smooth workflow for all
contributors, the general procedure for contributing has been established:

-<<<<<<< HEAD
-1. Either [open](https://github.com/cosmos/cosmos-sdk/issues/new/choose) or
-   [find](https://github.com/cosmos/cosmos-sdk/issues) an issue you'd like to help with
-2. Participate in thoughtful discussion on that issue
-3. If you would like to contribute:
-   1. If the issue is a proposal, ensure that the proposal has been accepted
-=======
1. Start by browsing [new issues](https://github.com/cosmos/cosmos-sdk/issues) and [discussions](https://github.com/cosmos/cosmos-sdk/discussions). If you are looking for something interesting or if you have something in your mind, there is a chance it has already been discussed.
   - Looking for a good place to start contributing? How about checking out some [good first issues](https://github.com/cosmos/cosmos-sdk/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)?
2. Determine whether a GitHub issue or discussion is more appropriate for your needs:
   1. If you want to propose something new that requires specification or an additional design, or you would like to change a process, start with a [new discussion](https://github.com/cosmos/cosmos-sdk/discussions/new). With discussions, we can better handle the design process using discussion threads. A discussion usually leads to one or more issues.
   2. If the issue you want addressed is a specific proposal or a bug, then open a [new issue](https://github.com/cosmos/cosmos-sdk/issues/new/choose).
   3. Review and comment on the proposal.
3. Participate in thoughtful discussion on that issue.
4. If you would like to contribute:
   1. Ensure that the proposal has been accepted.
->>>>>>> 479485f95 (style: lint go and markdown (#10060))
   2. Ensure that nobody else has already begun working on this issue. If they have,
      make sure to contact them to collaborate.
   3. If nobody has been assigned for the issue and you would like to work on it,
      make a comment on the issue to inform the community of your intentions
-<<<<<<< HEAD
-      to begin work
-   4. Follow standard GitHub best practices: fork the repo, branch from the
-      HEAD of `master`, make some commits, and submit a PR to `master`
-      - For core developers working within the cosmos-sdk repo, to ensure a clear
-        ownership of branches, branches must be named with the convention
-        `{moniker}/{issue#}-branch-name`
-   5. Be sure to submit the PR in `Draft` mode submit your PR early, even if
-      it's incomplete as this indicates to the community you're working on
-      something and allows them to provide comments early in the development process
-   6. When the code is complete it can be marked `Ready for Review`
-   7. Be sure to include a relevant change log entry in the `Unreleased` section
-      of `CHANGELOG.md` (see file for log format)
-
-Note that for very small or blatantly obvious problems (such as typos) it is
-=======
      to begin work.
5. To submit your work as a contribution to the repository follow standard GitHub best practices. See [pull request guideline](#pull-requests) below.

**Note:** For very small or blatantly obvious problems such as typos, you are
->>>>>>> 479485f95 (style: lint go and markdown (#10060))
not required to open an issue to submit a PR, but be aware that for more
complex problems/features, if a PR is opened before an adequate design discussion
has taken place in a GitHub issue, that PR runs a high likelihood of being rejected.

-<<<<<<< HEAD
-Other notes:
-
-- Looking for a good place to start contributing? How about checking out some
-  [good first issues](https://github.com/cosmos/cosmos-sdk/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
-- Please make sure to run `make format` before every commit - the easiest way
-  to do this is have your editor run it for you upon saving a file. Additionally
-  please ensure that your code is lint compliant by running `make lint-fix`.
-=======
## Teams Dev Calls

The Cosmos SDK has many stakeholders contributing and shaping the project.
Regen Network Development leads the Cosmos SDK R&D, and welcomes long-term contributors and additional maintainers from other projects. We use self-organizing principles to coordinate and collaborate across organizations in structured "Working Groups" that focus on specific problem domains or architectural components of the Cosmos SDK.

@@ -119,20 +81,18 @@ When proposing an architecture decision for the Cosmos SDK, please start by open
 to do this is to have your editor run it for you upon saving a file (most editors
 will do this automatically when configured for the language mode).
 Additionally, be sure that your code is lint compliant by running `make lint-fix`.
->>>>>>> 479485f95 (style: lint go and markdown (#10060))
 A convenience git `pre-commit` hook that runs the formatters automatically
 before each commit is available in the `contrib/githooks/` directory.
+- Follow the [CODING GUIDELINES](CODING_GUIDELINES.md), which define criteria for designing and coding software.

-## Architecture Decision Records (ADR)
+Code is merged into master through the pull request procedure.
+
+### Testing

-<<<<<<< HEAD
-When proposing an architecture decision for the SDK, please create an [ADR](./docs/architecture/README.md)
-so further discussions can be made. We are following this process so all involved parties are in
-agreement before any party begins coding the proposed implementation. If you would like to see some examples
-of how these are written refer to the current [ADRs](https://github.com/cosmos/cosmos-sdk/tree/master/docs/architecture).
+Tests can be executed by running `make test` at the top level of the Cosmos SDK repository.
+
+### Pull Requests

-## Pull Requests
-=======
Before submitting a pull request:

- merge the latest master `git merge origin/master`,

Then:

1. If you have something to show, **start with a `Draft` PR**. It's good to have early validation of your work and we highly recommend this practice. A Draft PR also indicates to the community that the work is in progress.
2. When the code is complete, change your PR from `Draft` to `Ready for Review`.
3. Go through the actions for each checkbox present in the PR template description. The PR actions are automatically provided for each new PR.
4. Be sure to include a relevant changelog entry in the `Unreleased` section of `CHANGELOG.md` (see file for log format).
->>>>>>> 479485f95 (style: lint go and markdown (#10060))

-PRs should be categorically broken up based on the type of changes being made (i.e. `fix`, `feat`,
-`refactor`, `docs`, etc.). The *type* must be included in the PR title as a prefix (e.g.
-`fix: <description>`). This ensures that all changes committed to the base branch follow the
+PRs must have a category prefix that is based on the type of changes being made (for example, `fix`, `feat`,
+`refactor`, `docs`, and so on). The *type* must be included in the PR title as a prefix (for example,
+`fix: <description>`). This convention ensures that all changes that are committed to the base branch follow the
 [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification.
 Additionally, each PR should only address a single issue.

+Pull requests are merged automatically using [`automerge` action](https://mergify.io/features/auto-merge).
+
+NOTE: when merging, GitHub will squash commits and rebase on top of master.
+
 ### Pull Request Templates

-There are currently three PR templates. The [default template](./.github/PULL_REQUEST_TEMPLATE.md) is for types `fix`, `feat`, and `refactor`. We also have a [docs template](./.github/PULL_REQUEST_TEMPLATE/docs.md) for documentation changes and an [other template](./.github/PULL_REQUEST_TEMPLATE/other.md) for changes that do not affect production code.
When previewing a PR before it has been opened, you can change the template by adding one of the following parameters to the url: +There are three PR templates. The [default template](./.github/PULL_REQUEST_TEMPLATE.md) is for types `fix`, `feat`, and `refactor`. We also have a [docs template](./.github/PULL_REQUEST_TEMPLATE/docs.md) for documentation changes and an [other template](./.github/PULL_REQUEST_TEMPLATE/other.md) for changes that do not affect production code. When previewing a PR before it has been opened, you can change the template by adding one of the following parameters to the url: - `template=docs.md` - `template=other.md` ### Requesting Reviews -In order to accomodate the review process, the author of the PR must complete the author checklist +In order to accommodate the review process, the author of the PR must complete the author checklist +(from the pull request template) to the best of their abilities before marking the PR as "Ready for Review". If you would like to receive early feedback on the PR, open the PR as a "Draft" and leave a comment in the PR indicating that you would like early feedback and tagging whoever you would like to receive feedback from. -### Reviewing Pull Requests +Codeowners are marked automatically as the reviewers. -All PRs require at least two reviews before they can be merged (one review might be acceptable in -the case of minor changes to [docs](./.github/PULL_REQUEST_TEMPLATE/docs.md) or [other](./.github/PULL_REQUEST_TEMPLATE/other.md) changes that do not affect production code). Each PR template has a -reviewers checklist that must be completed before the PR can be merged. Each reviewer is responsible +All PRs require at least two review approvals before they can be merged (one review might be acceptable in +the case of minor changes to [docs](./.github/PULL_REQUEST_TEMPLATE/docs.md) or [other](./.github/PULL_REQUEST_TEMPLATE/other.md) changes that do not affect production code). Each PR template has a reviewers checklist that must be completed before the PR can be merged. Each reviewer is responsible for all checked items unless they have indicated otherwise by leaving their handle next to specific -items. In addition, please use the following review explanations: +items. In addition, use the following review explanations: - `LGTM` without an explicit approval means that the changes look good, but you haven't thoroughly reviewed the reviewer checklist items. -- `Approval` means that you have completed some or all of the reviewer checklist items. If you only reviewed selected items, you have added your handle next to the items that you have reviewed. In addition, please follow these guidelines: +- `Approval` means that you have completed some or all of the reviewer checklist items. If you only reviewed selected items, you must add your handle next to the items that you have reviewed. In addition, follow these guidelines: - You must also think through anything which ought to be included but is not - You must think through whether any added code could be partially combined (DRYed) with existing code - You must think through any potential security issues or incentive-compatibility flaws introduced by the changes - Naming must be consistent with conventions and the rest of the codebase - - Code must live in a reasonable location, considering dependency structures (e.g. not importing testing modules in production code, or including example code modules in production code). 
- - If you approve of the PR, you are responsible for any issues mentioned here and any issues that should have been addressed after thoroughly reviewing the reviewer checklist items in the pull request template. -- If you sat down with the PR submitter and did a pairing review please note that in the `Approval`, or your PR comments. -- If you are only making "surface level" reviews, submit any notes as `Comments` without adding a review. + - Code must live in a reasonable location, considering dependency structures (for example, not importing testing modules in production code, or including example code modules in production code). + - If you approve the PR, you are responsible for any issues mentioned here and any issues that should have been addressed after thoroughly reviewing the reviewer checklist items in the pull request template. +- If you sat down with the PR submitter and did a pairing review, add this information in the `Approval` or your PR comments. +- If you are only making "surface level" reviews, submit notes as a `comment` review. ### Updating Documentation If you open a PR on the Cosmos SDK, it is mandatory to update the relevant documentation in `/docs`. -- If your change relates to the core SDK (baseapp, store, ...), please update the `docs/basics/`, `docs/core/` and/or `docs/building-modules/` folders. -- If your changes relate to the core of the CLI (not specifically to module's CLI/Rest), please modify the `docs/run-node/` folder. -- If your changes relate to a module, please update the module's spec in `x/moduleName/docs/spec/`. +- If your change relates to the core SDK (baseapp, store, ...), be sure to update the content in `docs/basics/`, `docs/core/` and/or `docs/building-modules/` folders. +- If your changes relate to the core of the CLI (not specifically to module's CLI/Rest), then modify the content in the `docs/run-node/` folder. +- If your changes relate to a module, then be sure to update the module's spec in `x/moduleName/docs/spec/`. When writing documentation, follow the [Documentation Writing Guidelines](./docs/DOC_WRITING_GUIDELINES.md). -## Forking - -Please note that Go requires code to live under absolute paths, which complicates forking. -While my fork lives at `https://github.com/rigeyrigerige/cosmos-sdk`, -the code should never exist at `$GOPATH/src/github.com/rigeyrigerige/cosmos-sdk`. -Instead, we use `git remote` to add the fork as a new remote for the original repo, -`$GOPATH/src/github.com/cosmos/cosmos-sdk`, and do all the work there. - -For instance, to create a fork and work on a branch of it, I would: - -- Create the fork on GitHub, using the fork button. -- Go to the original repo checked out locally (i.e. `$GOPATH/src/github.com/cosmos/cosmos-sdk`) -- `git remote rename origin upstream` -- `git remote add origin git@github.com:rigeyrigerige/cosmos-sdk.git` - -Now `origin` refers to my fork and `upstream` refers to the Cosmos-SDK version. -So I can `git push -u origin master` to update my fork, and make pull requests to Cosmos-SDK from there. -Of course, replace `rigeyrigerige` with your git handle. - -To pull in updates from the origin repo, run - -- `git fetch upstream` -- `git rebase upstream/master` (or whatever branch you want) - -Please don't make Pull Requests from `master`. - ## Dependencies -We use [Go 1.14 Modules](https://github.com/golang/go/wiki/Modules) to manage +We use [Go Modules](https://github.com/golang/go/wiki/Modules) to manage dependency versions. 


The master branch of every Cosmos repository should just build with `go get`,
which means they should be kept up-to-date with their dependencies, so we can
get away with telling people they can just `go get` our software.

Since some dependencies are not under our control, a third party may break our
build, in which case we can fall back on `go mod tidy -v`.

## Protobuf

-We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along with [gogoproto](https://github.com/gogo/protobuf) to generate code for use in Cosmos-SDK.
+We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along with [gogoproto](https://github.com/gogo/protobuf) to generate code for use in Cosmos SDK.

For deterministic behavior around Protobuf tooling, everything is containerized using Docker. Make sure to have Docker installed on your machine, or head to [Docker's website](https://docs.docker.com/get-docker/) to install it.

For formatting code in `.proto` files, you can run `make proto-format` command.

For linting and checking breaking changes, we use [buf](https://buf.build/). You can use the commands `make proto-lint` and `make proto-check-breaking` to respectively lint your proto files and check for breaking changes.

To generate the protobuf stubs, you can run `make proto-gen`.

We also added the `make proto-all` command to run all the above commands sequentially.

In order for imports to properly compile in your IDE, you may need to manually set your protobuf path in your IDE's workspace settings/config.

For example, in vscode your `.vscode/settings.json` should look like:

```
{
    "protoc": {
        "options": [
        "--proto_path=${workspaceRoot}/proto",
        "--proto_path=${workspaceRoot}/third_party/proto"
        ]
    }
}
```

-## Testing
-
-Tests can be ran by running `make test` at the top level of the SDK repository.
-
-We expect tests to use `require` or `assert` rather than `t.Skip` or `t.Fail`,
-unless there is a reason to do otherwise.
-When testing a function under a variety of different inputs, we prefer to use
-[table driven tests](https://github.com/golang/go/wiki/TableDrivenTests).
-Table driven test error messages should follow the following format
-`<desc>, tc #<index>, i #<index>`.
-`<desc>` is an optional short description of whats failing, `tc` is the
-index within the table of the testcase that is failing, and `i` is when there
-is a loop, exactly which iteration of the loop failed.
-The idea is you should be able to see the
-error message and figure out exactly what failed.
-Here is an example check:
-
-```go
-
-for tcIndex, tc := range cases {
-
-  for i := 0; i < tc.numTxsToTest; i++ {
-
-    require.Equal(t, expectedTx[:32], calculatedTx[:32],
-      "First 32 bytes of the txs differed. tc #%d, i #%d", tcIndex, i)
-```
-
 ## Branching Model and Release

-User-facing repos should adhere to the trunk based development branching model: https://trunkbaseddevelopment.com/.
+User-facing repos should adhere to the trunk based development branching model: https://trunkbaseddevelopment.com/. User branches should start with a user name, example: `{moniker}/{issue#}-branch-name`.

-Libraries need not follow the model strictly, but would be wise to.
+The Cosmos SDK repository is a [multi Go module](https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository) repository. This means that we have more than one Go module in a single repository.

-The SDK utilizes [semantic versioning](https://semver.org/).
+The Cosmos SDK utilizes [semantic versioning](https://semver.org/).

### PR Targeting

Ensure that you base and target your PR on the `master` branch.

-All feature additions should be targeted against `master`. Bug fixes for an outstanding release candidate
-should be targeted against the release candidate branch.
- -### Development Procedure - -- the latest state of development is on `master` -- `master` must never fail `make lint test test-race` -- `master` should not fail `make lint` -- no `--force` onto `master` (except when reverting a broken commit, which should seldom happen) -- create a development branch either on github.com/cosmos/cosmos-sdk, or your fork (using `git remote add origin`) -- before submitting a pull request, begin `git rebase` on top of `master` - -### Pull Merge Procedure - -- ensure pull branch is rebased on `master` -- run `make test` to ensure that all tests pass -- merge pull request - -### Release Procedure - -- Start on `master` -- Create the release candidate branch `rc/v*` (going forward known as **RC**) - and ensure it's protected against pushing from anyone except the release - manager/coordinator - - **no PRs targeting this branch should be merged unless exceptional circumstances arise** -- On the `RC` branch, prepare a new version section in the `CHANGELOG.md` - - All links must be link-ified: `$ python ./scripts/linkify_changelog.py CHANGELOG.md` - - Copy the entries into a `RELEASE_CHANGELOG.md`, this is needed so the bot knows which entries to add to the release page on GitHub. -- Kick off a large round of simulation testing (e.g. 400 seeds for 2k blocks) -- If errors are found during the simulation testing, commit the fixes to `master` - and create a new `RC` branch (making sure to increment the `rcN`) -- After simulation has successfully completed, create the release branch - (`release/vX.XX.X`) from the `RC` branch -- Create a PR to `master` to incorporate the `CHANGELOG.md` updates -- Tag the release (use `git tag -a`) and create a release in GitHub -- Delete the `RC` branches - -### Point Release Procedure - -At the moment, only a single major release will be supported, so all point releases will be based -off of that release. - -In order to alleviate the burden for a single person to have to cherry-pick and handle merge conflicts -of all desired backporting PRs to a point release, we instead maintain a living backport branch, where -all desired features and bug fixes are merged into as separate PRs. - -Example: - -Current release is `v0.38.4`. We then maintain a (living) branch `sru/release/v0.38.N`, given N as -the next patch release number (currently `0.38.5`) for the `0.38` release series. As bugs are fixed -and PRs are merged into `master`, if a contributor wishes the PR to be released as SRU into the -`v0.38.N` point release, the contributor must: - -1. Add `0.38.N-backport` label -2. Pull latest changes on the desired `sru/release/vX.X.N` branch -3. Create a 2nd PR merging the respective SRU PR into `sru/release/v0.38.N` -4. Update the PR's description and ensure it contains the following information: - - **[Impact]** Explanation of how the bug affects users or developers. - - **[Test Case]** section with detailed instructions on how to reproduce the bug. - - **[Regression Potential]** section with a discussion how regressions are most likely to manifest, or might - manifest even if it's unlikely, as a result of the change. **It is assumed that any SRU candidate PR is - well-tested before it is merged in and has an overall low risk of regression**. - -It is the PR's author's responsibility to fix merge conflicts, update changelog entries, and -ensure CI passes. If a PR originates from an external contributor, it may be a core team member's -responsibility to perform this process instead of the original author. 
-Lastly, it is core team's responsibility to ensure that the PR meets all the SRU criteria. - -Finally, when a point release is ready to be made: - -1. Create `release/v0.38.N` branch -2. Ensure changelog entries are verified - 1. Be sure changelog entries are added to `RELEASE_CHANGELOG.md` -3. Add release version date to the changelog -4. Push release branch along with the annotated tag: **git tag -a** -5. Create a PR into `master` containing ONLY `CHANGELOG.md` updates - 1. Do not push `RELEASE_CHANGELOG.md` to `master` - -Note, although we aim to support only a single release at a time, the process stated above could be -used for multiple previous versions. +All feature additions and all bug fixes must be targeted against `master`. The exception is bug fixes that apply only to an already-released version; in that case, the related bug fix PRs must target the release branch. + +If needed, we backport a commit from `master` to a release branch (excluding consensus-breaking features, API-breaking changes and similar). ## Code Owner Membership @@ -417,10 +250,10 @@ Other potential removal criteria: * Violation of Code of Conduct Earning this privilege should be considered to be no small feat and is by no -means guaranteed by any quantifiable metric. It is a symbol of great trust of +means guaranteed by any quantifiable metric. Serving as a code owner is a symbol of great trust from the community of this project. -## Concept & Release Approval Process +## Concept & Feature Approval Process The process for how Cosmos SDK maintainers take features and ADRs from concept to release is broken up into three distinct stages: **Strategy Discovery**, **Concept Approval**, and @@ -428,7 +261,7 @@ is broken up into three distinct stages: **Strategy Discovery**, **Concept Appro ### Strategy Discovery -* Develop long term priorities, strategy and roadmap for the SDK +* Develop long term priorities, strategy and roadmap for the Cosmos SDK * Release committee not yet defined as there is already a roadmap that can be used for the time being ### Concept Approval @@ -459,7 +292,7 @@ should convene to rectify the situation by either: **Approval Committee & Decision Making** -In absense of general consensus, decision making requires 1/2 vote from the two members +In absence of general consensus, decision making requires 1/2 vote from the two members of the **Concept Approval Committee**. **Committee Members** @@ -472,7 +305,7 @@ Members must: * Participate in all or almost all ADR discussions, both on GitHub as well as in bi-weekly Architecture Review meetings -* Be active contributors to the SDK, and furthermore should be continuously making substantial contributions +* Be active contributors to the Cosmos SDK, and furthermore should be continuously making substantial contributions to the project's codebase, review process, documentation and ADRs * Have stake in the Cosmos SDK project, represented by: * Being a client / user of the Cosmos SDK @@ -496,6 +329,6 @@ well as for PRs made as part of a release process: * Code reviewers should have more senior engineering capability * 1/2 approval is required from the **primary repo maintainers** in `CODEOWNERS` -*Note: For any major or minor release series denoted as a "Stable Release" (e.g. v0.39 "Launchpad"), a separate release +**Note**: For any major release series denoted as a "Stable Release" (e.g. v0.42 "Stargate"), a separate release committee is often established.
Stable Releases, and their corresponding release committees are documented -separately in [STABLE_RELEASES.md](./STABLE_RELEASES.md)* +separately in [Stable Release Policy](./RELEASE_PROCESS.md#stable-release-policy)* diff --git a/STABLE_RELEASES.md b/STABLE_RELEASES.md deleted file mode 100644 index 55fd004415e3..000000000000 --- a/STABLE_RELEASES.md +++ /dev/null @@ -1,215 +0,0 @@ -# Stable Releases - -*Stable Release Series* continue to receive bug fixes until they reach **End Of Life**. - -<<<<<<< HEAD:STABLE_RELEASES.md -Only the following release series are currently supported and receive bug fixes: -======= -## Major Release Procedure - -A _major release_ is an increment of the first number (eg: `v1.2` → `v2.0.0`) or the _point number_ (eg: `v1.1 → v1.2.0`, also called _point release_). Each major release opens a _stable release series_ and receives updates outlined in the [Major Release Maintenance](#major-release-maintenance) section. - -Before making a new _major_ release we do beta and release candidate releases. For example, for release 1.0.0: - ``` -v1.0.0-beta1 → v1.0.0-beta2 → ... → v1.0.0-rc1 → v1.0.0-rc2 → ... → v1.0.0 -``` - -- Release a first beta version on the `master` branch and freeze `master` from receiving any new features. After beta is released, we focus on releasing the release candidate: - - finish audits and reviews - - kick off a large round of simulation testing (e.g. 400 seeds for 2k blocks) - - perform functional tests - - add more tests - - release new beta versions as bugs are discovered and fixed. -- After the team feels that the `master` works fine we create a `release/vY` branch (going forward known as the release branch), where `Y` is the version number, with the patch part substituted to `x` (eg: 0.42.x, 1.0.x). Ensure the release branch is protected so that pushes against the release branch are permitted only by the release manager or release coordinator. - - **PRs targeting this branch can be merged _only_ when exceptional circumstances arise** - - update the GitHub mergify integration by adding instructions for automatically backporting commits from `master` to the `release/vY` using the `backport/Y` label. -- In the release branch, prepare a new version section in the `CHANGELOG.md` - - All links must be link-ified: `$ python ./scripts/linkify_changelog.py CHANGELOG.md` - - Copy the entries into a `RELEASE_CHANGELOG.md`, this is needed so the bot knows which entries to add to the release page on GitHub. -- Create a new annotated git tag for a release candidate (eg: `git tag -a v1.1.0-rc1`) in the release branch. - - from this point we unfreeze master. - - the SDK teams collaborate and do their best to run testnets in order to validate the release. - - when bugs are found, create a PR for `master`, and backport fixes to the release branch. - - create new release candidate tags after bugs are fixed. - -- After the team feels the release branch is stable and everything works, create a full release: - - update `CHANGELOG.md`. - - create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch. - - Create a GitHub release. - -Following _semver_ philosophy, point releases after `v1.0`: - -- must not break API -- can break consensus - -Before `v1.0`, point releases can break both API and consensus. - -## Patch Release Procedure - -A _patch release_ is an increment of the patch number (eg: `v1.2.0` → `v1.2.1`).
- -**Patch releases must not break API nor consensus.** - -Updates to the release branch should come from `master` by backporting PRs (usually done by automatic cherry pick followed by PRs to the release branch). The backports must be marked using the `backport/Y` label on the PR for `master`. -It is the PR author's responsibility to fix merge conflicts, update changelog entries, and -ensure CI passes. If a PR originates from an external contributor, a core team member assumes -responsibility to perform this process instead of the original author. -Lastly, it is the core team's responsibility to ensure that the PR meets all the SRU criteria. - -Point releases must follow the [Stable Release Policy](#stable-release-policy). - -After the release branch has all commits required for the next patch release: - -- update `CHANGELOG.md`. -- create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch. -- Create a GitHub release. - -## Major Release Maintenance - -Major Release series continue to receive bug fixes (released as a Patch Release) until they reach **End Of Life**. -Each Major Release series is maintained in compliance with the **Stable Release Policy** as described in this document. -Note: not every Major Release is denoted as a stable release. - -Only the following major release series have a stable release status: ->>>>>>> 479485f95 (style: lint go and markdown (#10060)):RELEASE_PROCESS.md - -* **0.42 «Stargate»** will be supported until 6 months after **0.43.0** is published. A fairly strict **bugfix-only** rule applies to pull requests that are requested to be included into a stable point-release. -* **0.43 «Stargate»** is the latest stable release. - -<<<<<<< HEAD:STABLE_RELEASES.md -The **0.43 «Stargate»** release series is maintained in compliance with the **Stable Release Policy** as described in this document. - -======= ->>>>>>> 479485f95 (style: lint go and markdown (#10060)):RELEASE_PROCESS.md -## Stable Release Policy - -This policy presently applies *only* to the following release series: - -* **0.43 «Stargate»** - -### Point Releases - -Once a Cosmos-SDK release has been completed and published, updates for it are released under certain circumstances -and must follow the [Point Release Procedure](CONTRIBUTING.md). - -### Rationale - -Unlike in-development `master` branch snapshots, **Cosmos-SDK** releases are subject to much wider adoption -by a significantly different demographic of users. During development, changes in the `master` branch -affect SDK users, application developers, early adopters, and other advanced users that elect to use -unstable experimental software at their own risk. - -Conversely, users of a stable release expect a high degree of stability. They build their applications on it, and the -problems they experience with it could be potentially highly disruptive to their projects. - -Stable release updates are recommended to the vast majority of developers, and so it is crucial to treat them -with great caution. Hence, when updates are proposed, they must be accompanied by a strong rationale and present -a low risk of regressions, i.e. even one-line changes could cause unexpected regressions due to side effects or -poorly tested code. We never assume that any change, no matter how small or non-intrusive, is completely exempt -from regression risks. - -Therefore, the requirements for stable changes are different from those that are candidates to be merged in -the `master` branch.
When preparing future major releases, our aim is to design the most elegant, user-friendly and -maintainable SDK possible, which often entails fundamental changes to the SDK's architecture design, rearranging and/or -renaming packages as well as reducing code duplication so that we maintain common functions and data structures in one -place rather than leaving them scattered all over the code base. However, once a release is published, the -priority is to minimise the risk caused by changes that are not strictly required to fix qualifying bugs; this tends to -be correlated with minimising the size of such changes. As such, the same bug may need to be fixed in different -ways in stable releases and the `master` branch. - -### Migrations - -To smooth the update to the latest stable release, the SDK includes a set of CLI commands for managing migrations between SDK versions, under the `migrate` subcommand. Only migration scripts between stable releases are included. For the current release, **0.42 «Stargate»** and later migrations are supported. - -### What qualifies as a Stable Release Update (SRU) - -* **High-impact bugs** - * Bugs that may directly cause a security vulnerability. - * *Severe regressions* from a previous Cosmos-SDK release. This includes all sorts of issues - that may render the core packages or the `x/` modules unusable. - * Bugs that may cause **loss of users' data**. -* Other safe cases: - * Bugs which don't fit in the aforementioned categories for which an obvious safe patch is known. - * Relatively small yet strictly non-breaking features with strong support from the community. - * Relatively small yet strictly non-breaking changes that introduce forward-compatible client - features to smooth the migration to successive releases. - * Relatively small yet strictly non-breaking CLI improvements. - -### What does not qualify as SRU - -* State machine changes. -* Breaking changes in Protobuf definitions, as specified in [ADR-044](./docs/architecture/adr-044-protobuf-updates-guidelines.md). -* Changes that introduce API breakages (e.g. public functions and interfaces removal/renaming). -* Client-breaking changes in gRPC and HTTP request and response types. -* CLI-breaking changes. -* Cosmetic fixes, such as formatting or linter warning fixes. - -## What pull requests will be included in stable point-releases - -Pull requests that fix bugs and add features that fall in the following categories do not require a **Stable Release Exception** to be granted to be included in a stable point-release: - -* **Severe regressions**. -* Bugs that may cause **client applications** to be **largely unusable**. -* Bugs that may cause **state corruption or data loss**. -* Bugs that may directly or indirectly cause a **security vulnerability**. -* Non-breaking features that are strongly requested by the community. -* Non-breaking CLI improvements that are strongly requested by the community. - -## What pull requests will NOT be automatically included in stable point-releases - -As a rule of thumb, the following changes will **NOT** be automatically accepted into stable point-releases: - -* **State machine changes**. -* **Protobuf-breaking changes**, as specified in [ADR-044](./docs/architecture/adr-044-protobuf-updates-guidelines.md). -* **Client-breaking changes**, i.e. changes that prevent gRPC, HTTP and RPC clients from continuing to interact with the node without any change. -* **API-breaking changes**, i.e.
changes that prevent client applications from *building without modifications* to the client application's source code. -* **CLI-breaking changes**, i.e. changes that require usage changes for CLI users. - - In some circumstances, PRs that don't meet the aforementioned criteria might be raised and asked to be granted a *Stable Release Exception*. - -## Stable Release Exception - Procedure - -1. Check that the bug is either fixed or not reproducible in `master`. It is, in general, not appropriate to release bug fixes for stable releases without first testing them in `master`. Please apply the label [v0.43](https://github.com/cosmos/cosmos-sdk/milestone/26) to the issue. -2. Add a comment to the issue and ensure it contains the following information (see the bug template below): - -* **[Impact]** An explanation of the bug's impact on users and justification for backporting the fix to the stable release. -* A **[Test Case]** section containing detailed instructions on how to reproduce the bug. -* A **[Regression Potential]** section with a clear assessment of how regressions are most likely to manifest as a result of the pull request that aims to fix the bug in the target stable release. - -3. **Stable Release Managers** will review and discuss the PR. Once *consensus* surrounding the rationale has been reached and the technical review has successfully concluded, the pull request will be merged in the respective point-release target branch (e.g. `release/v0.43.x`) and the PR included in the point-release's respective milestone (e.g. `v0.43.5`). - -### Stable Release Exception - Bug template - ``` -#### Impact - -Brief explanation of the effects of the bug on users and a justification for backporting the fix to the stable release. - -#### Test Case - -Detailed instructions on how to reproduce the bug on Stargate's most recently published point-release. - -#### Regression Potential - -Explanation of how regressions might manifest - even if it's unlikely. -It is assumed that stable release fixes are well-tested and they come with a low risk of regressions. -It's crucial to make the effort of thinking about what could happen in case a regression emerges. -``` - -## Stable Release Managers - -The **Stable Release Managers** evaluate and approve or reject updates and backports to Cosmos-SDK Stable Release series, -according to the [stable release policy](#stable-release-policy) and [release procedure](#stable-release-exception-procedure). -Decisions are made by consensus. - -Their responsibilities include: - -* Driving the Stable Release Exception process. -* Approving/rejecting proposed changes to a stable release series. -* Executing the release process of stable point-releases in compliance with the [Point Release Procedure](CONTRIBUTING.md). - -The Stable Release Managers are appointed by the Interchain Foundation. The current Stable Release Managers are: - -* @clevinson - Cory Levinson -* @amaurym - Amaury Martiny -* @robert-zaremba - Robert Zaremba diff --git a/contrib/rosetta/README.md b/contrib/rosetta/README.md index a05446ea94f1..f408729581da 100644 --- a/contrib/rosetta/README.md +++ b/contrib/rosetta/README.md @@ -17,15 +17,11 @@ Contains the required files to set up rosetta cli and make it work against its w ## node -<<<<<<< HEAD -Contains the files for a deterministic network, with fixed keys and some actions on there, to test parsing of msgs and historical balances.
-======= Contains the files for a deterministic network, with fixed keys and some actions on there, to test parsing of msgs and historical balances. This image is used to run a simapp node and to run the rosetta server. ## Rosetta-cli The docker image for ./rosetta-cli/Dockerfile is on [docker hub](https://hub.docker.com/r/tendermintdev/rosetta-cli). Whenever rosetta-cli releases a new version, rosetta-cli/Dockerfile should be updated to reflect the new version and pushed to docker hub. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Notes diff --git a/cosmovisor/README.md b/cosmovisor/README.md index 07c63d73fb1c..e263966a49a0 100644 --- a/cosmovisor/README.md +++ b/cosmovisor/README.md @@ -1,57 +1,12 @@ # Cosmovisor Quick Start -<<<<<<< HEAD `cosmovisor` is a small process manager for Cosmos SDK application binaries that monitors the governance module via stdout for incoming chain upgrade proposals. If it sees a proposal that gets approved, `cosmovisor` can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary. -======= -`cosmovisor` is a small process manager for Cosmos SDK application binaries that monitors the governance module for incoming chain upgrade proposals. If it sees a proposal that gets approved, `cosmovisor` can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary. - -#### Design - -Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app: - -* it will pass arguments to the associated app (configured by `DAEMON_NAME` env variable). - Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`; -* it will manage an app by restarting and upgrading if needed; -* it is configured using environment variables, not positional arguments. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) *Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.* ## Installation -<<<<<<< HEAD To install `cosmovisor`, run the following command: -======= -## Setup - -### Installation - -To install the latest version of `cosmovisor`, run the following command: - ``` -go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@latest -``` - -To install a previous version, you can specify the version. IMPORTANT: Chains that use Cosmos-SDK v0.42.x and want to use the auto-download feature MUST use Cosmovisor v0.1.0 - ``` -go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@v0.1.0 -``` - -It is possible to confirm the version of cosmovisor when using Cosmovisor v1.0.0, but it is not possible to do so with `v0.1.0`. - -You can also install from source by pulling the cosmos-sdk repository and switching to the correct version and building as follows: - ``` -git clone git@github.com:cosmos/cosmos-sdk -cd cosmos-sdk -git checkout cosmovisor/vx.x.x -cd cosmovisor -make -``` - -This will build cosmovisor in your current directory.
Afterwards you may want to put it into your machine's PATH as follows: ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ``` go get github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor @@ -59,21 +14,7 @@ go get github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor ## Command Line Arguments And Environment Variables -<<<<<<< HEAD All arguments passed to `cosmovisor` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor` cannot accept any command-line arguments other than those available to the application binary, nor will it print anything to output other than what is printed by the application binary. -======= -### Command Line Arguments And Environment Variables - -The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are: - -* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration. -* `run` - Run the configured binary using the rest of the provided arguments. -* `version`, or `--version` - Output the `cosmovisor` version and also run the binary with the `version` argument. - -All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary. - -*Note: Use of `cosmovisor` without one of the action arguments is deprecated. For backwards compatibility, if the first argument is not an action argument, `run` is assumed. However, this fallback might be removed in future versions, so it is recommended that you always provide `run`. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) `cosmovisor` reads its configuration from environment variables: @@ -125,24 +66,7 @@ In order to support downloadable binaries, a tarball for each upgrade binary wil The `DAEMON` specific code and operations (e.g. tendermint config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives such as command-line flags and environment variables also work as expected. -<<<<<<< HEAD ## Auto-Download -======= -### Detecting Upgrades - -`cosmovisor` is polling the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height. -The following heuristic is applied to detect the upgrade: - -+ When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary which is `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name. -+ If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for the `data/upgrade-info.json` file to trigger an upgrade. -+ If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` tries immediately to make an upgrade according to the `name` attribute in `data/upgrade-info.json`. -+ Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger an upgrade mechanism.
- -When the upgrade mechanism is triggered, `cosmovisor` will: - -1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute); -2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an easier setup (maybe they are syncing a non-validating fullnode and want to do little maintenance), there is another option. diff --git a/crypto/keyring/keyring.go b/crypto/keyring/keyring.go index ab68f92fe612..f96c8635243c 100644 --- a/crypto/keyring/keyring.go +++ b/crypto/keyring/keyring.go @@ -475,10 +475,6 @@ func (ks keystore) List() ([]Info, error) { return nil, err } -<<<<<<< HEAD -======= - var res []*Record //nolint:prealloc ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) sort.Strings(keys) for _, key := range keys { diff --git a/docs/DOCS_README.md b/docs/DOCS_README.md index a2e8da3ce89b..59bedcca3cff 100644 --- a/docs/DOCS_README.md +++ b/docs/DOCS_README.md @@ -1,15 +1,5 @@ # Updating the docs -<<<<<<< HEAD -If you want to open a PR on the Cosmos SDK to update the documentation, please follow the guidelines in the [`CONTRIBUTING.md`](https://github.com/cosmos/cosmos-sdk/tree/master/CONTRIBUTING.md#updating-documentation) - -## Translating - -- Docs translations live in a `docs/country-code/` folder, where `country-code` stands for the country code of the language used (`cn` for Chinese, `kr` for Korea, `fr` for France, ...). - -Always translate content living on `master`. - -Only content under `/docs/intro/`, `/docs/basics/`, `/docs/core/`, `/docs/building-modules/` and `docs/run-node/` needs to be translated, as well as `docs/README.md`. It is also nice (but not mandatory) to translate `/docs/spec/`. - -Specify the release/tag of the translation in the README of your translation folder. Update the release/tag each time you update the translation. -======= If you want to open a PR in Cosmos SDK to update the documentation, please follow the guidelines in [`CONTRIBUTING.md`](https://github.com/cosmos/cosmos-sdk/tree/master/CONTRIBUTING.md#updating-documentation). ## Internationalization @@ -26,7 +16,6 @@ If you want to open a PR in Cosmos SDK to update the documentation, please follo - Each `docs//` folder must also have a `README.md` that includes a translated version of both the layout and content within the root-level [`README.md`](https://github.com/cosmos/cosmos-sdk/tree/master/docs/README.md). The layout defined in the `README.md` is used to build the homepage. - Always translate content living on `master` unless you are revising documentation for a specific release. Translated documentation like the root-level documentation is semantically versioned. - For additional configuration options, please see [VuePress Internationalization](https://vuepress.vuejs.org/guide/i18n.html).
->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Docs Build Workflow diff --git a/docs/architecture/adr-038-state-listening.md b/docs/architecture/adr-038-state-listening.md index 0d32eac126f3..9bc644dddb26 100644 --- a/docs/architecture/adr-038-state-listening.md +++ b/docs/architecture/adr-038-state-listening.md @@ -207,40 +207,18 @@ func (rs *Store) CacheMultiStore() types.CacheMultiStore { We will introduce a new `StreamingService` interface for exposing `WriteListener` data streams to external consumers. ```go -<<<<<<< HEAD // Hook interface used to hook into the ABCI message processing of the BaseApp type Hook interface { ListenBeginBlock(ctx sdk.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) // update the streaming service with the latest BeginBlock messages ListenEndBlock(ctx sdk.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) // update the streaming service with the latest EndBlock messages ListenDeliverTx(ctx sdk.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) // update the streaming service with the latest DeliverTx messages -======= -// ABCIListener interface used to hook into the ABCI message processing of the BaseApp -type ABCIListener interface { - // ListenBeginBlock updates the streaming service with the latest BeginBlock messages - ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error - // ListenEndBlock updates the streaming service with the latest EndBlock messages - ListenEndBlock(ctx types.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error - // ListenDeliverTx updates the streaming service with the latest DeliverTx messages - ListenDeliverTx(ctx types.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) } // StreamingService interface for registering WriteListeners with the BaseApp and updating the service with the ABCI messages using the hooks type StreamingService interface { -<<<<<<< HEAD Stream(wg *sync.WaitGroup, quitChan <-chan struct{}) // streaming service loop, awaits kv pairs and writes them to some destination stream or file Listeners() map[sdk.StoreKey][]storeTypes.WriteListener // returns the streaming service's listeners for the BaseApp to register Hook -======= - // Stream is the streaming service loop, awaits kv pairs and writes them to some destination stream or file - Stream(wg *sync.WaitGroup) error - // Listeners returns the streaming service's listeners for the BaseApp to register - Listeners() map[types.StoreKey][]store.WriteListener - // ABCIListener interface for hooking into the ABCI messages from inside the BaseApp - ABCIListener - // Closer interface - io.Closer ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) } ``` @@ -585,7 +563,6 @@ func NewSimApp( // configure state listening capabilities using AppOptions listeners := cast.ToStringSlice(appOpts.Get("store.streamers")) for _, listenerName := range listeners { -<<<<<<< HEAD // get the store keys allowed to be exposed for this streaming service/state listeners exposeKeyStrs := cast.ToStringSlice(appOpts.Get(fmt.Sprintf("streamers.%s.keys", listenerName))) exposeStoreKeys = make([]storeTypes.StoreKey, 0, len(exposeKeyStrs)) @@ -593,26 +570,6 @@ func NewSimApp( if storeKey, ok := keys[keyStr]; ok { exposeStoreKeys = append(exposeStoreKeys, storeKey) } -======= - // get the store keys allowed to be exposed for this streaming service - exposeKeyStrs :=
cast.ToStringSlice(appOpts.Get(fmt.Sprintf("streamers.%s.keys", streamerName))) - var exposeStoreKeys []sdk.StoreKey - if exposeAll(exposeKeyStrs) { // if list contains `*`, expose all StoreKeys - exposeStoreKeys = make([]sdk.StoreKey, 0, len(keys)) - for _, storeKey := range keys { - exposeStoreKeys = append(exposeStoreKeys, storeKey) - } - } else { - exposeStoreKeys = make([]sdk.StoreKey, 0, len(exposeKeyStrs)) - for _, keyStr := range exposeKeyStrs { - if storeKey, ok := keys[keyStr]; ok { - exposeStoreKeys = append(exposeStoreKeys, storeKey) - } - } - } - if len(exposeStoreKeys) == 0 { // short circuit if we are not exposing anything - continue ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) } // get the constructor for this listener name constructor, err := baseapp.NewStreamingServiceConstructor(listenerName) diff --git a/docs/architecture/adr-040-storage-and-smt-state-commitments.md b/docs/architecture/adr-040-storage-and-smt-state-commitments.md index 6b9549b86bd2..115723576091 100644 --- a/docs/architecture/adr-040-storage-and-smt-state-commitments.md +++ b/docs/architecture/adr-040-storage-and-smt-state-commitments.md @@ -110,96 +110,6 @@ We need to be able to process transactions and roll-back state updates if a tran We identified use-cases, where modules will need to save an object commitment without storing an object itself. Sometimes clients are receiving complex objects, and they have no way to prove the correctness of that object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly. -<<<<<<< HEAD -======= -### Refactor MultiStore - -The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module is using its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity. Instead it causes problems related to race conditions and atomic DB commits (see: [\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)). - -We propose to reduce the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity. - -```go -// Used where read-only access to versions is needed. -type BasicRootStore interface { - Store - GetKVStore(StoreKey) KVStore - CacheRootStore() CacheRootStore -} - -// Used as the main app state, replacing CommitMultiStore. -type CommitRootStore interface { - BasicRootStore - Committer - Snapshotter - - GetVersion(uint64) (BasicRootStore, error) - SetInitialVersion(uint64) error - - ... // Trace and Listen methods -} - -// Replaces CacheMultiStore for branched state. -type CacheRootStore interface { - BasicRootStore - Write() - - ... // Trace and Listen methods -} - -// Example of constructor parameters for the concrete type.
-type RootStoreConfig struct { - Upgrades *StoreUpgrades - InitialVersion uint64 - - ReservePrefix(StoreKey, StoreType) -} -``` - - - - -In contrast to `MultiStore`, `RootStore` doesn't allow dynamically mounting sub-stores or providing an arbitrary backing DB for individual sub-stores. - -NOTE: modules will be able to use a special commitment and their own DBs. For example: a module which will use ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low level interface. - -#### Compatibility support - -To ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`. - -The new `RootStore` and supporting types can be implemented in a `store/v2` package to avoid breaking existing code. - -#### Merkle Proofs and IBC - -Currently, an IBC (v1.0) Merkle proof path consists of two elements (`["<store-key>", "<record-key>"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained. -The root hash of the proof for `<record-key>` is hashed with the `<store-key>` to validate against the App Hash. - -This is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure, and won't produce separate proofs for the store- and record-key. Ideally, the store-key component of the proof could just be omitted, and updated to use a "no-op" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `"ibc"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem which already widely adopts the IBC module. Requesting an update of the IBC module across the chains is a time-consuming effort and not easily feasible. - -As a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle Tree to create a final App hash. The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `"ibc"`. - -The workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase. - -The presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs. - -### Optimization: compress module key prefixes - -We consider a compression of prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't have a common byte prefix. For Merkle Proofs we can't use prefix compression - so it should only apply for the `SS` keys. Moreover, the prefix compression should be only applied for the module namespace. More precisely:
More precisely: - -+ each module has it's own namespace; -+ when accessing a module namespace we create a KVStore with embedded prefix; -+ that prefix will be compressed only when accessing and managing `SS`. - -We need to assure that the codes won't change. We can fix the mapping in a static variable (provided by an app) or SS state under a special key. - -TODO: need to make decision about the key compression. - -## Optimization: SS key compression - -Some objects may be saved with key, which contains a Protobuf message type. Such keys are long. We could save a lot of space if we can map Protobuf message types in varints. - -TODO: finalize this or move to another ADR. - ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Consequences ### Backwards Compatibility diff --git a/docs/migrations/rest.md b/docs/migrations/rest.md index dc767358239a..7dd2832b30a7 100644 --- a/docs/migrations/rest.md +++ b/docs/migrations/rest.md @@ -102,8 +102,4 @@ Previously, some modules exposed legacy `POST` endpoints to generate unsigned tr ## Migrating to gRPC -<<<<<<< HEAD -Instead of hitting REST endpoints as described in the previous paragraph, the SDK also exposes a gRPC server. Any client can use gRPC instead of REST to interact with the node. An overview of different ways to communicate with a node can be found [here](../core/grpc_rest.md), and a concrete tutorial for setting up a gRPC client [here](../run-node/txs.md#programmatically-with-go). -======= Instead of hitting REST endpoints as described above, the Cosmos SDK also exposes a gRPC server. Any client can use gRPC instead of REST to interact with the node. An overview of different ways to communicate with a node can be found [here](../core/grpc_rest.md), and a concrete tutorial for setting up a gRPC client can be found [here](../run-node/txs.md#programmatically-with-go). ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/docs/run-node/rosetta.md b/docs/run-node/rosetta.md index 36ad7c14af7e..49f64866568c 100644 --- a/docs/run-node/rosetta.md +++ b/docs/run-node/rosetta.md @@ -1,8 +1,5 @@ # Rosetta -<<<<<<< HEAD -Package rosetta implements the rosetta API for the current cosmos sdk release series. -======= The `rosetta` package implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](../architecture/adr-035-rosetta-api-support.md). ## Add Rosetta Command @@ -55,7 +52,6 @@ appd rosetta --grpc "gRPC endpoint (ex: localhost:9090)" --addr "rosetta binding address (ex: :8080)" ``` ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Extension diff --git a/docs/run-node/run-node.md b/docs/run-node/run-node.md index 9bc97e2e2f2e..ae38f67c2e2e 100644 --- a/docs/run-node/run-node.md +++ b/docs/run-node/run-node.md @@ -39,8 +39,6 @@ The `~/.simapp` folder has the following structure: |- priv_validator_key.json # Private key to use as a validator in the consensus protocol. ``` -<<<<<<< HEAD -======= ## Updating Some Default Settings If you want to change any field values in configuration files (for ex: genesis.json) you can use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. Few examples are listed here. 
@@ -61,7 +59,6 @@ jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > te ## Adding Genesis Accounts ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](./keyring.md#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend). Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence: diff --git a/go.mod b/go.mod index 5acfde188543..61eaf04b23d8 100644 --- a/go.mod +++ b/go.mod @@ -56,79 +56,8 @@ require ( gopkg.in/yaml.v2 v2.4.0 ) -<<<<<<< HEAD -======= -require ( - filippo.io/edwards25519 v1.0.0-beta.2 // indirect - github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d // indirect - github.com/DataDog/zstd v1.4.5 // indirect - github.com/Workiva/go-datastructures v1.0.52 // indirect - github.com/beorn7/perks v1.0.1 // indirect - github.com/cespare/xxhash v1.1.0 // indirect - github.com/cespare/xxhash/v2 v2.1.1 // indirect - github.com/cosmos/ledger-go v0.9.2 // indirect - github.com/danieljoos/wincred v1.0.2 // indirect - github.com/davecgh/go-spew v1.1.1 // indirect - github.com/desertbit/timer v0.0.0-20180107155436-c41aec40b27f // indirect - github.com/dgraph-io/badger/v2 v2.2007.2 // indirect - github.com/dgraph-io/ristretto v0.1.0 // indirect - github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 // indirect - github.com/dustin/go-humanize v1.0.0 // indirect - github.com/dvsekhvalnov/jose2go v0.0.0-20200901110807-248326c1351b // indirect - github.com/felixge/httpsnoop v1.0.1 // indirect - github.com/fsnotify/fsnotify v1.5.1 // indirect - github.com/go-kit/kit v0.10.0 // indirect - github.com/go-logfmt/logfmt v0.5.0 // indirect - github.com/godbus/dbus v0.0.0-20190726142602-4481cbc300e2 // indirect - github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b // indirect - github.com/golang/snappy v0.0.3 // indirect - github.com/google/btree v1.0.0 // indirect - github.com/google/orderedcode v0.0.1 // indirect - github.com/gorilla/websocket v1.4.2 // indirect - github.com/gsterjov/go-libsecret v0.0.0-20161001094733-a6f4afe4910c // indirect - github.com/gtank/merlin v0.1.1 // indirect - github.com/gtank/ristretto255 v0.1.2 // indirect - github.com/hashicorp/go-immutable-radix v1.0.0 // indirect - github.com/hashicorp/hcl v1.0.0 // indirect - github.com/inconshreveable/mousetrap v1.0.0 // indirect - github.com/jmhodges/levigo v1.0.0 // indirect - github.com/keybase/go-keychain v0.0.0-20190712205309-48d3d31d256d // indirect - github.com/klauspost/compress v1.12.3 // indirect - github.com/lib/pq v1.10.2 // indirect - github.com/libp2p/go-buffer-pool v0.0.2 // indirect - github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect - github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 // indirect - github.com/minio/highwayhash v1.0.1 // indirect - github.com/mitchellh/mapstructure v1.4.2 // indirect - github.com/mtibben/percent v0.2.1 // indirect - github.com/pelletier/go-toml v1.9.4 // indirect - github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 // indirect - github.com/pmezard/go-difflib v1.0.0 // indirect - github.com/prometheus/client_model v0.2.0 // indirect - github.com/prometheus/procfs v0.6.0 // indirect - github.com/rcrowley/go-metrics 
v0.0.0-20200313005456-10cdbea86bc0 // indirect - github.com/rs/cors v1.7.0 // indirect - github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa // indirect - github.com/spf13/afero v1.6.0 // indirect - github.com/spf13/jwalterweatherman v1.1.0 // indirect - github.com/subosito/gotenv v1.2.0 // indirect - github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca // indirect - github.com/tecbot/gorocksdb v0.0.0-20191217155057-f0fad39f321c // indirect - github.com/zondax/hid v0.9.0 // indirect - go.etcd.io/bbolt v1.3.5 // indirect - golang.org/x/net v0.0.0-20210903162142-ad29c8ab022f // indirect - golang.org/x/sys v0.0.0-20210903071746-97244b99971b // indirect - golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 // indirect - golang.org/x/text v0.3.6 // indirect - gopkg.in/ini.v1 v1.63.2 // indirect - gopkg.in/yaml.v2 v2.4.0 // indirect - gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect - nhooyr.io/websocket v1.8.6 // indirect -) - // latest grpc doesn't work with our modified proto compiler, so we need to enforce // the following version across all dependencies. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) replace google.golang.org/grpc => google.golang.org/grpc v1.33.2 replace github.com/gogo/protobuf => github.com/regen-network/protobuf v1.3.3-alpha.regen.1 diff --git a/x/auth/ante/sigverify.go b/x/auth/ante/sigverify.go index ad7f1024440c..8ff8ee8d98d7 100644 --- a/x/auth/ante/sigverify.go +++ b/x/auth/ante/sigverify.go @@ -228,12 +228,7 @@ func OnlyLegacyAminoSigners(sigData signing.SignatureData) bool { } } -<<<<<<< HEAD:x/auth/ante/sigverify.go func (svd SigVerificationDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (newCtx sdk.Context, err error) { -======= -func (svd sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, isReCheckTx, simulate bool) error { - sdkCtx := sdk.UnwrapSDKContext(ctx) ->>>>>>> 479485f95 (style: lint go and markdown (#10060)):x/auth/middleware/sigverify.go // no need to verify signatures on recheck tx if ctx.IsReCheckTx() { return next(ctx, tx, simulate) @@ -258,11 +253,7 @@ func (svd sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, is } for i, sig := range sigs { -<<<<<<< HEAD:x/auth/ante/sigverify.go acc, err := GetSignerAcc(ctx, svd.ak, signerAddrs[i]) -======= - acc, err := GetSignerAcc(sdkCtx, svd.ak, signerAddrs[i]) ->>>>>>> 479485f95 (style: lint go and markdown (#10060)):x/auth/middleware/sigverify.go if err != nil { return ctx, err } diff --git a/x/auth/spec/01_concepts.md b/x/auth/spec/01_concepts.md index f723751f06a2..e028ebe8e9a3 100644 --- a/x/auth/spec/01_concepts.md +++ b/x/auth/spec/01_concepts.md @@ -4,8 +4,6 @@ order: 1 # Concepts -<<<<<<< HEAD -======= **Note:** The auth module is different from the [authz module](../modules/authz/). The differences are: @@ -13,7 +11,6 @@ The differences are: * `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types. * `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Gas & Fees Fees serve two purposes for an operator of the network.
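The sigverify.go hunk above shows the shape shared by all ante decorators: do a check, then hand off to `next`. A minimal sketch of that pattern follows; `LoggingDecorator` is a made-up name used only for illustration, but the `AnteHandle` signature matches the SDK's `AnteDecorator` interface.

```go
package ante

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// LoggingDecorator is a hypothetical decorator illustrating the chaining
// pattern used by SigVerificationDecorator: perform some work, then
// delegate to the next handler in the chain.
type LoggingDecorator struct{}

func (ld LoggingDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
	// Like SigVerificationDecorator, skip work that is unnecessary on recheck.
	if ctx.IsReCheckTx() {
		return next(ctx, tx, simulate)
	}
	ctx.Logger().Info("running ante step", "msgs", len(tx.GetMsgs()))
	return next(ctx, tx, simulate)
}
```

Decorators like this are composed with `sdk.ChainAnteDecorators`, which is what makes each decorator's `next` argument the remainder of the chain.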
diff --git a/x/auth/spec/05_vesting.md b/x/auth/spec/05_vesting.md index 399a6d3be02a..7519cb2f24b8 100644 --- a/x/auth/spec/05_vesting.md +++ b/x/auth/spec/05_vesting.md @@ -614,8 +614,5 @@ linearly over time. all coins at a given time. - PeriodicVestingAccount: A vesting account implementation that vests coins according to a custom vesting schedule. -<<<<<<< HEAD -======= - PermanentLockedAccount: It does not ever release coins, locking them indefinitely. Coins in this account can still be used for delegating and for governance votes even while locked. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/bank/spec/README.md b/x/bank/spec/README.md index dd7f8df8aba2..9a1a0afb6edc 100644 --- a/x/bank/spec/README.md +++ b/x/bank/spec/README.md @@ -100,9 +100,3 @@ The available permissions are: 4. **[Events](04_events.md)** - [Handlers](04_events.md#handlers) 5. **[Parameters](05_params.md)** -<<<<<<< HEAD -======= -6. **[Client](06_client.md)** - - [CLI](06_client.md#cli) - - [gRPC](06_client.md#grpc) ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/distribution/spec/README.md b/x/distribution/spec/README.md index ac641ab84103..ea56f59e6f1a 100644 --- a/x/distribution/spec/README.md +++ b/x/distribution/spec/README.md @@ -42,7 +42,7 @@ following rewards between validators and associated delegators: Fees are pooled within a global pool, as well as validator specific proposer-reward pools. The mechanisms used allow for validators and delegators -to independently and lazily withdraw their rewards. +to independently and lazily withdraw their rewards. ## Shortcomings @@ -101,9 +101,3 @@ to set up a script to periodically withdraw and rebond rewards. - [BeginBlocker](06_events.md#beginblocker) - [Handlers](06_events.md#handlers) 7. **[Parameters](07_params.md)** -<<<<<<< HEAD -======= -8. **[Parameters](07_params.md)** - - [CLI](08_client.md#cli) - - [gRPC](08_client.md#grpc) ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/feegrant/spec/README.md b/x/feegrant/spec/README.md index 155485e0c81a..b1bd7febfd61 100644 --- a/x/feegrant/spec/README.md +++ b/x/feegrant/spec/README.md @@ -30,9 +30,3 @@ This module allows accounts to grant fee allowances and to use fees from their a - [MsgGrantAllowance](04_events.md#msggrantallowance) - [MsgRevokeAllowance](04_events.md#msgrevokeallowance) - [Exec fee allowance](04_events.md#exec-fee-allowance) -<<<<<<< HEAD -======= -5. **[Client](05_client.md)** - - [CLI](05_client.md#cli) - - [gRPC](05_client.md#grpc) ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) diff --git a/x/gov/spec/01_concepts.md b/x/gov/spec/01_concepts.md index 36a97c0ddba6..29e582990b09 100644 --- a/x/gov/spec/01_concepts.md +++ b/x/gov/spec/01_concepts.md @@ -66,15 +66,8 @@ Once the proposal's deposit reaches `MinDeposit`, it enters voting period. If pr When a proposal is finalized, the coins from the deposit are either refunded or burned, according to the final tally of the proposal: -<<<<<<< HEAD - If the proposal is approved or if it's rejected but _not_ vetoed, deposits will automatically be refunded to their respective depositor (transferred from the governance `ModuleAccount`). - When the proposal is vetoed with a supermajority, deposits will be burned from the governance `ModuleAccount`. -======= -- If the proposal is approved or rejected but _not_ vetoed, each deposit will be automatically refunded to its respective depositor (transferred from the governance `ModuleAccount`).
-- When the proposal is vetoed with a supermajority, deposits will be burned from the governance `ModuleAccount` and the proposal information along with its deposit information will be removed from state. -- All refunded or burned deposits are removed from the state. Events are issued when burning or refunding a deposit. -- NOTE: The proposals which completed the voting period, cannot return the deposits when queried. ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) ## Vote diff --git a/x/slashing/spec/README.md b/x/slashing/spec/README.md index 4fbf184be78d..b67c6f7c7dcf 100644 --- a/x/slashing/spec/README.md +++ b/x/slashing/spec/README.md @@ -43,10 +43,7 @@ This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosyste 7. **[Staking Tombstone](07_tombstone.md)** - [Abstract](07_tombstone.md#abstract) 8. **[Parameters](08_params.md)** -<<<<<<< HEAD -======= 9. **[Client](09_client.md)** - [CLI](09_client.md#cli) - [gRPC](09_client.md#grpc) - [REST](09_client.md#rest) ->>>>>>> 479485f95 (style: lint go and markdown (#10060)) From ae8645b82932380d56c3671b9a7eb06e606e40c2 Mon Sep 17 00:00:00 2001 From: Robert Zaremba Date: Thu, 11 Nov 2021 21:07:39 +0100 Subject: [PATCH 3/3] remove unnecessary files --- db/README.md | 72 --- docs/architecture/adr-043-nft-module.md | 340 ------------- .../adr-044-protobuf-updates-guidelines.md | 109 ---- docs/architecture/adr-046-module-params.md | 184 ------- store/streaming/README.md | 67 --- store/streaming/file/README.md | 66 --- store/v2/flat/store.go | 479 ------------------ store/v2/smt/store.go | 99 ---- x/auth/middleware/basic.go | 358 ------------- x/epoching/keeper/keeper.go | 192 ------- x/epoching/spec/03_to_improve.md | 44 -- x/group/internal/orm/spec/01_table.md | 40 -- 12 files changed, 2050 deletions(-) delete mode 100644 db/README.md delete mode 100644 docs/architecture/adr-043-nft-module.md delete mode 100644 docs/architecture/adr-044-protobuf-updates-guidelines.md delete mode 100644 docs/architecture/adr-046-module-params.md delete mode 100644 store/streaming/README.md delete mode 100644 store/streaming/file/README.md delete mode 100644 store/v2/flat/store.go delete mode 100644 store/v2/smt/store.go delete mode 100644 x/auth/middleware/basic.go delete mode 100644 x/epoching/keeper/keeper.go delete mode 100644 x/epoching/spec/03_to_improve.md delete mode 100644 x/group/internal/orm/spec/01_table.md diff --git a/db/README.md b/db/README.md deleted file mode 100644 index 01471f144c61..000000000000 --- a/db/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# Key-Value Database - -Databases supporting mappings of arbitrary byte sequences. - -## Interfaces - -The database interface types consist of objects to encapsulate the singular connection to the DB, transactions being made to it, historical version state, and iteration. - -### `DBConnection` - -This interface represents a connection to a versioned key-value database. All versioning operations are performed using methods on this type. - -* The `Versions` method returns a `VersionSet` which represents an immutable view of the version history at the current state. -* Version history is modified via the `{Save,Delete}Version` methods. -* Operations on version history do not modify any database contents. - -### `DBReader`, `DBWriter`, and `DBReadWriter` - -These types represent transactions on the database contents. Their methods provide CRUD operations as well as iteration. - -* Writeable transactions call `Commit` to flush operations to the source DB.
-* All open transactions must be closed with `Discard` or `Commit` before a new version can be saved on the source DB. -* The maximum number of safely concurrent transactions is dependent on the backend implementation. -* A single transaction object is not safe for concurrent use. -* Write conflicts on concurrent transactions will cause an error at commit time (optimistic concurrency control). - -#### `Iterator` - -* An iterator is invalidated by any writes within its `Domain` to the source transaction while it is open. -* An iterator must call `Close` before its source transaction is closed. - -### `VersionSet` - -This represents a self-contained and immutable view of a database's version history state. It is therefore safe to retain and concurrently access any instance of this object. - -## Implementations - -### In-memory DB - -The in-memory DB in the `db/memdb` package cannot be persisted to disk. It is implemented using the Google [btree](https://pkg.go.dev/github.com/google/btree) library. - -* This currently does not perform write conflict detection, so it only supports a single open write-transaction at a time. Multiple and concurrent read-transactions are supported. - -### BadgerDB - -A [BadgerDB](https://pkg.go.dev/github.com/dgraph-io/badger/v3)-based backend. Internally, this uses BadgerDB's ["managed" mode](https://pkg.go.dev/github.com/dgraph-io/badger/v3#OpenManaged) for version management. -Note that Badger only recognizes write conflicts for rows that are read _after_ a conflicting transaction was opened. In other words, the following will raise an error: - ```go -tx1, tx2 := db.Writer(), db.ReadWriter() -key := []byte("key") -tx2.Get(key) -tx1.Set(key, []byte("a")) -tx2.Set(key, []byte("b")) -tx1.Commit() // ok -err := tx2.Commit() // err is non-nil -``` - -But this will not: - ```go -tx1, tx2 := db.Writer(), db.ReadWriter() -key := []byte("key") -tx1.Set(key, []byte("a")) -tx2.Set(key, []byte("b")) -tx1.Commit() // ok -tx2.Commit() // ok -``` - -### RocksDB - -A [RocksDB](https://github.com/facebook/rocksdb)-based backend. Internally this uses [`OptimisticTransactionDB`](https://github.com/facebook/rocksdb/wiki/Transactions#optimistictransactiondb) to allow concurrent transactions with write conflict detection. Historical versioning is internally implemented with [Checkpoints](https://github.com/facebook/rocksdb/wiki/Checkpoints). diff --git a/docs/architecture/adr-043-nft-module.md b/docs/architecture/adr-043-nft-module.md deleted file mode 100644 index 99152f990e61..000000000000 --- a/docs/architecture/adr-043-nft-module.md +++ /dev/null @@ -1,340 +0,0 @@ -# ADR 43: NFT Module - -## Changelog - -- 05.05.2021: Initial Draft -- 07.01.2021: Incorporate Billy's feedback -- 07.02.2021: Incorporate feedback from Aaron, Shaun, Billy et al. - -## Status - -DRAFT - -## Abstract - -This ADR defines the `x/nft` module which is a generic implementation of NFTs, roughly "compatible" with ERC721. **Applications using the `x/nft` module must implement the following functions**: - -- `MsgNewClass` - Receive the user's request to create a class, and call the `NewClass` of the `x/nft` module. -- `MsgUpdateClass` - Receive the user's request to update a class, and call the `UpdateClass` of the `x/nft` module. -- `MsgMintNFT` - Receive the user's request to mint an NFT, and call the `MintNFT` of the `x/nft` module. -- `BurnNFT` - Receive the user's request to burn an NFT, and call the `BurnNFT` of the `x/nft` module.
-- `UpdateNFT` - Receive the user's request to update an NFT, and call the `UpdateNFT` of the `x/nft` module.
-
-## Context
-
-NFTs are much more than crypto art, and they can help accrue value to the Cosmos ecosystem. As a result, the Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and sending the ownership representative of NFTs, as discussed in https://github.com/cosmos/cosmos-sdk/discussions/9065.
-
-As was discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:
-
-- irismod/nft and modules/incubator/nft
-- CW721
-- DID NFTs
-- interNFT
-
-Since functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.
-
-Considering generic usage and compatibility of interchain protocols, including IBC and Gravity Bridge, it is preferred to have a generic NFT module design which handles generic NFT logic.
-
-This design enables composability: application-specific functions can be managed by other modules on the Cosmos Hub or on other Zones by importing the NFT module.
-
-The current design is based on the work done by the [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos repository](https://github.com/cosmos/modules/tree/master/incubator/nft).
-
-## Decision
-
-We will create a module `x/nft`, which contains the following functionality:
-
-- Store NFTs and track their ownership.
-- Expose `Keeper` interface for composing modules to mint and burn NFTs.
-- Expose external `Message` interface for users to transfer ownership of their NFTs.
-- Query NFTs and their supply information.
-
-### Types
-
-#### Class
-
-We define a model for NFT **Class**, which is comparable to an ERC721 Contract on Ethereum, under which a collection of NFTs can be created and managed.
-
-```protobuf
-message Class {
-  string id = 1;
-  string name = 2;
-  string symbol = 3;
-  string description = 4;
-  string uri = 5;
-  string uri_hash = 6;
-}
-```
-
-- `id` is an alphanumeric identifier of the NFT class; it is used as the primary index for storing the class; _required_
-- `name` is a descriptive name of the NFT class; _optional_
-- `symbol` is the symbol usually shown on exchanges for the NFT class; _optional_
-- `description` is a detailed description of the NFT class; _optional_
-- `uri` is a URL pointing to an off-chain JSON file that contains metadata about this NFT class ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); _optional_
-- `uri_hash` is a hash of the `uri`; _optional_
-
-#### NFT
-
-We define a general model for `NFT` as follows.
-
-```protobuf
-message NFT {
-  string class_id = 1;
-  string id = 2;
-  string uri = 3;
-  string uri_hash = 4;
-  google.protobuf.Any data = 10;
-}
-```
-
-- `class_id` is the identifier of the NFT class where the NFT belongs; _required_
-- `id` is an alphanumeric identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future.
  `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_
-
-  ```
-  {class_id}/{id} --> NFT (bytes)
-  ```
-
-- `uri` is a URL pointing to an off-chain JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_
-- `uri_hash` is a hash of the `uri`;
-- `data` is a field that CAN be used by composing modules to specify additional properties for the NFT; _optional_
-
-This ADR doesn't specify values that `data` can take; however, best practices recommend upper-level NFT modules clearly specify their contents. Although the value of this field doesn't provide the additional context required to manage NFT records (which means the field could technically be removed from the specification), its existence allows basic informational/UI functionality.
-
-### `Keeper` Interface
-
-```go
-type Keeper interface {
-  NewClass(class Class)
-  UpdateClass(class Class)
-
-  Mint(nft NFT, receiver sdk.AccAddress) // updates totalSupply
-  Burn(classId string, nftId string) // updates totalSupply
-  Update(nft NFT)
-  Transfer(classId string, nftId string, receiver sdk.AccAddress)
-
-  GetClass(classId string) Class
-  GetClasses() []Class
-
-  GetNFT(classId string, nftId string) NFT
-  GetNFTsOfClassByOwner(classId string, owner sdk.AccAddress) []NFT
-  GetNFTsOfClass(classId string) []NFT
-
-  GetOwner(classId string, nftId string) sdk.AccAddress
-  GetBalance(classId string, owner sdk.AccAddress) uint64
-  GetTotalSupply(classId string) uint64
-}
-```
-
-Other business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper`.
-
-### `Msg` Service
-
-```protobuf
-service Msg {
-  rpc Send(MsgSend) returns (MsgSendResponse);
-}
-
-message MsgSend {
-  string class_id = 1;
-  string id = 2;
-  string sender = 3;
-  string receiver = 4;
-}
-message MsgSendResponse {}
-```
-
-`MsgSend` can be used to transfer the ownership of an NFT to another address.
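-
-As an illustrative sketch of that flow (the import path and the helper name here are assumptions for the example, not part of this ADR), a client module could assemble the message from the generated types like this:
-
-```go
-import (
-	sdk "github.com/cosmos/cosmos-sdk/types"
-	nft "github.com/cosmos/cosmos-sdk/x/nft/types" // assumed package for the generated types
-)
-
-// newSendMsg is a hypothetical helper: it builds the message that transfers
-// NFT `nftID` of class `classID` from `sender` to `receiver`, ready to be
-// included in a transaction and broadcast.
-func newSendMsg(classID, nftID string, sender, receiver sdk.AccAddress) *nft.MsgSend {
-	return &nft.MsgSend{
-		ClassId:  classID,
-		Id:       nftID,
-		Sender:   sender.String(),
-		Receiver: receiver.String(),
-	}
-}
-```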
- -The implementation outline of the server is as follows: - -```go -type msgServer struct{ - k Keeper -} - -func (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) { - // check current ownership - assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id)) - - // transfer ownership - m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver) - - return &types.MsgSendResponse{}, nil -} -``` - -The query service methods for the `x/nft` module are: - -```proto -service Query { - - // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721 - rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) { - option (google.api.http).get = "/cosmos/nft/v1beta1/balance/{class_id}/{owner}"; - } - - // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721 - rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) { - option (google.api.http).get = "/cosmos/nft/v1beta1/owner/{class_id}/{id}"; - } - - // Supply queries the number of NFTs of a given class, same as totalSupply in ERC721Enumerable - rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) { - option (google.api.http).get = "/cosmos/nft/v1beta1/supply/{class_id}"; - } - - // NFTsOfClassByOwner queries the NFTs of a given class owned by the owner, similar to tokenOfOwnerByIndex in ERC721Enumerable - rpc NFTsOfClassByOwner(QueryNFTsOfClassByOwnerRequest) returns (QueryNFTsResponse) { - option (google.api.http).get = "/cosmos/nft/v1beta1/owned_nfts/{class_id}/{owner}"; - } - - // NFTsOfClass queries all NFTs of a given class, similar to tokenByIndex in ERC721Enumerable - rpc NFTsOfClass(QueryNFTsOfClassRequest) returns (QueryNFTsResponse) { - option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}"; - } - - // NFT queries an NFT based on its class and id. 
-  rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) {
-    option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}/{id}";
-  }
-
-  // Class queries an NFT class based on its id
-  rpc Class(QueryClassRequest) returns (QueryClassResponse) {
-    option (google.api.http).get = "/cosmos/nft/v1beta1/classes/{class_id}";
-  }
-
-  // Classes queries all NFT classes
-  rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) {
-    option (google.api.http).get = "/cosmos/nft/v1beta1/classes";
-  }
-}
-
-// QueryBalanceRequest is the request type for the Query/Balance RPC method
-message QueryBalanceRequest {
-  string class_id = 1;
-  string owner = 2;
-}
-
-// QueryBalanceResponse is the response type for the Query/Balance RPC method
-message QueryBalanceResponse {
-  uint64 amount = 1;
-}
-
-// QueryOwnerRequest is the request type for the Query/Owner RPC method
-message QueryOwnerRequest {
-  string class_id = 1;
-  string id = 2;
-}
-
-// QueryOwnerResponse is the response type for the Query/Owner RPC method
-message QueryOwnerResponse {
-  string owner = 1;
-}
-
-// QuerySupplyRequest is the request type for the Query/Supply RPC method
-message QuerySupplyRequest {
-  string class_id = 1;
-}
-
-// QuerySupplyResponse is the response type for the Query/Supply RPC method
-message QuerySupplyResponse {
-  uint64 amount = 1;
-}
-
-// QueryNFTsOfClassByOwnerRequest is the request type for the Query/NFTsOfClassByOwner RPC method
-message QueryNFTsOfClassByOwnerRequest {
-  string class_id = 1;
-  string owner = 2;
-  cosmos.base.query.v1beta1.PageRequest pagination = 3;
-}
-
-// QueryNFTsOfClassRequest is the request type for the Query/NFTsOfClass RPC method
-message QueryNFTsOfClassRequest {
-  string class_id = 1;
-  cosmos.base.query.v1beta1.PageRequest pagination = 2;
-}
-
-// QueryNFTsResponse is the response type for the Query/NFTsOfClass and Query/NFTsOfClassByOwner RPC methods
-message QueryNFTsResponse {
-  repeated cosmos.nft.v1beta1.NFT nfts = 1;
-  cosmos.base.query.v1beta1.PageResponse pagination = 2;
-}
-
-// QueryNFTRequest is the request type for the Query/NFT RPC method
-message QueryNFTRequest {
-  string class_id = 1;
-  string id = 2;
-}
-
-// QueryNFTResponse is the response type for the Query/NFT RPC method
-message QueryNFTResponse {
-  cosmos.nft.v1beta1.NFT nft = 1;
-}
-
-// QueryClassRequest is the request type for the Query/Class RPC method
-message QueryClassRequest {
-  string class_id = 1;
-}
-
-// QueryClassResponse is the response type for the Query/Class RPC method
-message QueryClassResponse {
-  cosmos.nft.v1beta1.Class class = 1;
-}
-
-// QueryClassesRequest is the request type for the Query/Classes RPC method
-message QueryClassesRequest {
-  // pagination defines an optional pagination for the request.
-  cosmos.base.query.v1beta1.PageRequest pagination = 1;
-}
-
-// QueryClassesResponse is the response type for the Query/Classes RPC method
-message QueryClassesResponse {
-  repeated cosmos.nft.v1beta1.Class classes = 1;
-  cosmos.base.query.v1beta1.PageResponse pagination = 2;
-}
-```
-
-### Interoperability
-
-Interoperability is all about reusing assets between modules and chains. The former is achieved by ADR-033: Protobuf client-server communication. At the time of writing, ADR-033 is not finalized. The latter is achieved by IBC, and here we will focus on the IBC side.
-IBC is implemented per module. Here, we decided that NFTs will be recorded and managed in `x/nft`. This requires the creation of a new IBC standard and an implementation of it.
-
-For IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance-keeping functionality to x/nft, or else re-implement all functionality using the NFT object type understood by the IBC client. In other words: x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for bookkeeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book. Not using x/nft will require implementing another module for IBC.
-
-## Consequences
-
-### Backward Compatibility
-
-No backward incompatibilities.
-
-### Forward Compatibility
-
-This specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId), and we conform to this implicitly because a single module is currently intended to track NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe.
-
-### Positive
-
-- NFT identifiers available on Cosmos Hub.
-- Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721.
-- NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge.
-
-### Negative
-
-- A new IBC app is required for x/nft.
-
-### Neutral
-
-- Other functions need more modules. For example, a custody module is needed for NFT trading functionality, and a collectible module is needed for defining NFT properties.
-
-## Further Discussions
-
-For other kinds of applications on the Hub, more app-specific modules can be developed in the future:
-
-- `x/nft/custody`: custody of NFTs to support trading functionality.
-- `x/nft/marketplace`: selling and buying NFTs using sdk.Coins.
-
-Other networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.
-
-## References
-
-- Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/9065
-- x/nft: initialize module: https://github.com/cosmos/cosmos-sdk/pull/9174
-- [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/adr-033-protobuf-inter-module-comm.md)
diff --git a/docs/architecture/adr-044-protobuf-updates-guidelines.md b/docs/architecture/adr-044-protobuf-updates-guidelines.md
deleted file mode 100644
index a76a7579ba07..000000000000
--- a/docs/architecture/adr-044-protobuf-updates-guidelines.md
+++ /dev/null
@@ -1,109 +0,0 @@
-# ADR 044: Guidelines for Updating Protobuf Definitions
-
-## Changelog
-
-- 28.06.2021: Initial Draft
-
-## Status
-
-Draft
-
-## Abstract
-
-This ADR provides guidelines and recommended practices when updating Protobuf definitions. These guidelines target module developers.
-
-## Context
-
-The Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/master/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version. The reason is to avoid breaking tooling (including indexers and explorers), wallets, and other third-party integrations.
-
-When making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations.
We noticed however that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example:
-
-- Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node.
-- Marking fields as `reserved`. Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version. However, by doing so, client backwards compatibility is broken as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.
-
-Moreover, module developers often face other questions around Protobuf definitions such as "Can I rename a field?" or "Can I deprecate a field?" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions.
-
-## Decision
-
-We decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions:
-
-- `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs.
-- `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments.
-- `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix.
-- `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix.
-- `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility.
-
-On top of Buf's recommendations we add the following guidelines that are specific to the Cosmos SDK.
-
-### Updating Protobuf Definition Without Bumping Version
-
-#### 1. `Msg`s MUST NOT have new fields
-
-When processing `Msg`s, the Cosmos SDK's antehandlers are strict and don't allow unknown fields in `Msg`s. This is checked by the unknown field rejection in the [`codec/unknownproto` package](https://github.com/cosmos/cosmos-sdk/blob/master/codec/unknownproto).
-
-Now imagine a v0.43 node accepting a `MsgExample` transaction, and in v0.44 the chain developer decides to add a field to `MsgExample`. A client developer, who only manipulates Protobuf definitions, would see that `MsgExample` has a new field and will populate it. However, sending the new `MsgExample` to an old v0.43 node would cause the v0.43 node to reject the `MsgExample` because of the unknown field. The expectation that the same Protobuf version can be used across multiple node versions MUST be guaranteed.
-
-For this reason, module developers MUST NOT add new fields to existing `Msg`s.
-
-It is worth mentioning that this restriction applies not only to the `Msg` itself, but also to all nested structs and `Any`s inside a `Msg`.
-
-#### 2. Non-`Msg`-related Protobuf definitions MAY have new fields
-
-On the other hand, module developers MAY add new fields to Protobuf definitions related to the `Query` service or to objects which are saved in the store. This recommendation follows the Protobuf specification, but is added in this document for clarity.
-
-#### 3. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields
-
-Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields.
If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version). - -As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically: - -- The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty. -- The Cosmos SDK now supports [governance split votes](./adr-037-gov-split-vote.md). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for 1 vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if the `len(options) == 1` and `options[0].Weight == 1.0`. - -#### 4. Fields MUST NOT be renamed - -Whereas the official Protobuf recommendations do not prohibit renaming fields, as it does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields will lead to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI. - -### Incrementing Protobuf Package Version - -TODO, needs architecture review. Some topics: - -- Bumping versions frequency -- When bumping versions, should the Cosmos SDK support both versions? - - i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions? -- mention ADR-023 Protobuf naming - -## Consequences - -> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future. - -### Backwards Compatibility - -> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright. - -### Positive - -- less pain to tool developers -- more compatibility in the ecosystem -- ... - -### Negative - -{negative consequences} - -### Neutral - -- more rigor in Protobuf review - -## Further Discussions - -This ADR is still in the DRAFT stage, and the "Incrementing Protobuf Package Version" will be filled in once we make a decision on how to correctly do it. - -## Test Cases [optional] - -Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable. 
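-
-As an illustrative sketch of guideline 3 (the message and field below are hypothetical, not an exact SDK definition), deprecating a field keeps the field number and wire format intact while signaling the change to clients:
-
-```protobuf
-syntax = "proto3";
-
-import "google/protobuf/timestamp.proto";
-
-// Plan is a hypothetical upgrade plan. The field number and wire format of
-// `time` stay unchanged; only the option and the comment signal deprecation,
-// and nodes MAY reject messages where this field is set.
-message Plan {
-  string name = 1;
-  google.protobuf.Timestamp time = 2 [deprecated = true];
-}
-```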
-
-## References
-
-- [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1
-- [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes
diff --git a/docs/architecture/adr-046-module-params.md b/docs/architecture/adr-046-module-params.md
deleted file mode 100644
index 520c79884e82..000000000000
--- a/docs/architecture/adr-046-module-params.md
+++ /dev/null
@@ -1,184 +0,0 @@
-# ADR 046: Module Params
-
-## Changelog
-
-- Sep 22, 2021: Initial Draft
-
-## Status
-
-Proposed
-
-## Abstract
-
-This ADR describes an alternative approach to how Cosmos SDK modules use, interact
-with, and store their respective parameters.
-
-## Context
-
-Currently, in the Cosmos SDK, modules that require the use of parameters use the
-`x/params` module. The `x/params` module works by having modules define parameters,
-typically via a simple `Params` structure, and registering that structure in
-the `x/params` module via a unique `Subspace` that belongs to the respective
-registering module. The registering module then has unique access to its respective
-`Subspace`. Through this `Subspace`, the module can get and set its `Params`
-structure.
-
-In addition, the Cosmos SDK's `x/gov` module has direct support for changing
-parameters on-chain via a `ParamChangeProposal` governance proposal type, where
-stakeholders can vote on suggested parameter changes.
-
-There are various tradeoffs to using the `x/params` module to manage individual
-module parameters. Namely, managing parameters essentially comes for "free" in
-that developers only need to define the `Params` struct, the `Subspace`, and the
-various auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However,
-there are some notable drawbacks. These drawbacks include the fact that parameters
-are serialized in state via JSON, which is extremely slow. In addition, parameter
-changes via `ParamChangeProposal` governance proposals have no way of reading from
-or writing to state. In other words, it is currently not possible to have any
-state transitions in the application during an attempt to change param(s).
-
-## Decision
-
-We will build off of the alignment of `x/gov` and `x/authz` work per
-[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers
-will create one or more unique parameter data structures that must be serialized
-to state. The Param data structures must implement the `sdk.Msg` interface, with a
-respective Protobuf Msg service method that validates and updates the parameters
-with all necessary changes. The `x/gov` module, via the work done in
-[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param
-messages, which will be handled by Protobuf Msg services.
-
-Note, it is up to developers to decide how to structure their parameters and
-the respective `sdk.Msg` messages. Consider the parameters currently defined in
-`x/auth` using the `x/params` module for parameter management:
-
-```protobuf
-message Params {
-  uint64 max_memo_characters = 1;
-  uint64 tx_sig_limit = 2;
-  uint64 tx_size_cost_per_byte = 3;
-  uint64 sig_verify_cost_ed25519 = 4;
-  uint64 sig_verify_cost_secp256k1 = 5;
-}
-```
-
-Developers can choose to either create a unique data structure for every field in
-`Params` or they can create a single `Params` structure as outlined above in the
-case of `x/auth`.
-
-In the former, per-field approach, an `sdk.Msg` would need to be created for every single
This can become burdensome if there are a lot of -parameter fields. In the latter case, there is only a single data structure and -thus only a single message handler, however, the message handler might have to be -more sophisticated in that it might need to understand what parameters are being -changed vs what parameters are untouched. - -Params change proposals are made using the `x/gov` module. Execution is done through -`x/authz` authorization to the root `x/gov` module's account. - -Continuing to use `x/auth`, we demonstrate a more complete example: - -```go -type Params struct { - MaxMemoCharacters uint64 - TxSigLimit uint64 - TxSizeCostPerByte uint64 - SigVerifyCostED25519 uint64 - SigVerifyCostSecp256k1 uint64 -} - -type MsgUpdateParams struct { - MaxMemoCharacters uint64 - TxSigLimit uint64 - TxSizeCostPerByte uint64 - SigVerifyCostED25519 uint64 - SigVerifyCostSecp256k1 uint64 -} - -type MsgUpdateParamsResponse struct {} - -func (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) { - ctx := sdk.UnwrapSDKContext(goCtx) - - // verification logic... - - // persist params - params := ParamsFromMsg(msg) - ms.SaveParams(ctx, params) - - return &types.MsgUpdateParamsResponse{}, nil -} - -func ParamsFromMsg(msg *types.MsgUpdateParams) Params { - // ... -} -``` - -A gRPC `Service` query should also be provided, for example: - -```protobuf -service Query { - // ... - - rpc Params(QueryParamsRequest) returns (QueryParamsResponse) { - option (google.api.http).get = "/cosmos//v1beta1/params"; - } -} - -message QueryParamsResponse { - Params params = 1 [(gogoproto.nullable) = false]; -} -``` - -## Consequences - -As a result of implementing the module parameter methodology, we gain the ability -for module parameter changes to be stateful and extensible to fit nearly every -application's use case. We will be able to emit events (and trigger hooks registered -to that events using the work proposed in [even hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)), -call other Msg service methods or perform migration. -In addition, there will be significant gains in performance when it comes to reading -and writing parameters from and to state, especially if a specific set of parameters -are read on a consistent basis. - -However, this methodology will require developers to implement more types and -Msg service metohds which can become burdensome if many parameters exist. In addition, -developers are required to implement persistance logics of module parameters. -However, this should be trivial. - -### Backwards Compatibility - -The new method for working with module parameters is naturally not backwards -compatible with the existing `x/params` module. However, the `x/params` will -remain in the Cosmos SDK and will be marked as deprecated with no additional -functionality being added apart from potential bug fixes. Note, the `x/params` -module may be removed entirely in a future release. - -### Positive - -- Module parameters are serialized more efficiently -- Modules are able to react on parameters changes and perform additional actions. -- Special events can be emitted, allowing hooks to be triggered. - -### Negative - -- Module parameters becomes slightly more burdensome for module developers: - - Modules are now responsible for persisting and retrieving parameter state - - Modules are now required to have unique message handlers to handle parameter - changes per unique parameter data structure. 
- -### Neutral - -- Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed - and merged. - - - -## References - -- https://github.com/cosmos/cosmos-sdk/pull/9810 -- https://github.com/cosmos/cosmos-sdk/issues/9438 -- https://github.com/cosmos/cosmos-sdk/discussions/9913 diff --git a/store/streaming/README.md b/store/streaming/README.md deleted file mode 100644 index 46e343416a52..000000000000 --- a/store/streaming/README.md +++ /dev/null @@ -1,67 +0,0 @@ -# State Streaming Service - -This package contains the constructors for the `StreamingService`s used to write state changes out from individual KVStores to a -file or stream, as described in [ADR-038](../../docs/architecture/adr-038-state-listening.md) and defined in [types/streaming.go](../../baseapp/streaming.go). -The child directories contain the implementations for specific output destinations. - -Currently, a `StreamingService` implementation that writes state changes out to files is supported, in the future support for additional -output destinations can be added. - -The `StreamingService` is configured from within an App using the `AppOptions` loaded from the app.toml file: - -```toml -[store] - streamers = [ # if len(streamers) > 0 we are streaming - "file", # name of the streaming service, used by constructor - ] - -[streamers] - [streamers.file] - keys = ["list", "of", "store", "keys", "we", "want", "to", "expose", "for", "this", "streaming", "service"] - write_dir = "path to the write directory" - prefix = "optional prefix to prepend to the generated file names" -``` - -`store.streamers` contains a list of the names of the `StreamingService` implementations to employ which are used by `ServiceTypeFromString` -to return the `ServiceConstructor` for that particular implementation: - -```go -listeners := cast.ToStringSlice(appOpts.Get("store.streamers")) -for _, listenerName := range listeners { - constructor, err := ServiceTypeFromString(listenerName) - if err != nil { - // handle error - } -} -``` - -`streamers` contains a mapping of the specific `StreamingService` implementation name to the configuration parameters for that specific service. -`streamers.x.keys` contains the list of `StoreKey` names for the KVStores to expose using this service and is required by every type of `StreamingService`. -In order to expose *all* KVStores, we can include `*` in this list. An empty list is equivalent to turning the service off. - -Additional configuration parameters are optional and specific to the implementation. -In the case of the file streaming service, `streamers.file.write_dir` contains the path to the -directory to write the files to, and `streamers.file.prefix` contains an optional prefix to prepend to the output files to prevent potential collisions -with other App `StreamingService` output files. - -The `ServiceConstructor` accepts `AppOptions`, the store keys collected using `streamers.x.keys`, a `BinaryMarshaller` and -returns a `StreamingService` implementation. The `AppOptions` are passed in to provide access to any implementation specific configuration options, -e.g. in the case of the file streaming service the `streamers.file.write_dir` and `streamers.file.prefix`. - -```go -streamingService, err := constructor(appOpts, exposeStoreKeys, appCodec) -if err != nil { - // handler error -} -``` - -The returned `StreamingService` is loaded into the BaseApp using the BaseApp's `SetStreamingService` method. -The `Stream` method is called on the service to begin the streaming process. 
Depending on the implementation this process -may be synchronous or asynchronous with the message processing of the state machine. - -```go -bApp.SetStreamingService(streamingService) -wg := new(sync.WaitGroup) -quitChan := make(chan struct{}) -streamingService.Stream(wg, quitChan) -``` diff --git a/store/streaming/file/README.md b/store/streaming/file/README.md deleted file mode 100644 index 3e4a248e1a95..000000000000 --- a/store/streaming/file/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# File Streaming Service - -This pkg contains an implementation of the [StreamingService](../../../baseapp/streaming.go) that writes -the data stream out to files on the local filesystem. This process is performed synchronously with the message processing -of the state machine. - -## Configuration - -The `file.StreamingService` is configured from within an App using the `AppOptions` loaded from the app.toml file: - -```toml -[store] - streamers = [ # if len(streamers) > 0 we are streaming - "file", # name of the streaming service, used by constructor - ] - -[streamers] - [streamers.file] - keys = ["list", "of", "store", "keys", "we", "want", "to", "expose", "for", "this", "streaming", "service"] - write_dir = "path to the write directory" - prefix = "optional prefix to prepend to the generated file names" -``` - -We turn the service on by adding its name, "file", to `store.streamers`- the list of streaming services for this App to employ. - -In `streamers.file` we include three configuration parameters for the file streaming service: - -1. `streamers.x.keys` contains the list of `StoreKey` names for the KVStores to expose using this service. -In order to expose *all* KVStores, we can include `*` in this list. An empty list is equivalent to turning the service off. -2. `streamers.file.write_dir` contains the path to the directory to write the files to. -3. `streamers.file.prefix` contains an optional prefix to prepend to the output files to prevent potential collisions -with other App `StreamingService` output files. - -##### Encoding - -For each pair of `BeginBlock` requests and responses, a file is created and named `block-{N}-begin`, where N is the block number. -At the head of this file the length-prefixed protobuf encoded `BeginBlock` request is written. -At the tail of this file the length-prefixed protobuf encoded `BeginBlock` response is written. -In between these two encoded messages, the state changes that occurred due to the `BeginBlock` request are written chronologically as -a series of length-prefixed protobuf encoded `StoreKVPair`s representing `Set` and `Delete` operations within the KVStores the service -is configured to listen to. - -For each pair of `DeliverTx` requests and responses, a file is created and named `block-{N}-tx-{M}` where N is the block number and M -is the tx number in the block (i.e. 0, 1, 2...). -At the head of this file the length-prefixed protobuf encoded `DeliverTx` request is written. -At the tail of this file the length-prefixed protobuf encoded `DeliverTx` response is written. -In between these two encoded messages, the state changes that occurred due to the `DeliverTx` request are written chronologically as -a series of length-prefixed protobuf encoded `StoreKVPair`s representing `Set` and `Delete` operations within the KVStores the service -is configured to listen to. - -For each pair of `EndBlock` requests and responses, a file is created and named `block-{N}-end`, where N is the block number. 
-At the head of this file the length-prefixed protobuf encoded `EndBlock` request is written. -At the tail of this file the length-prefixed protobuf encoded `EndBlock` response is written. -In between these two encoded messages, the state changes that occurred due to the `EndBlock` request are written chronologically as -a series of length-prefixed protobuf encoded `StoreKVPair`s representing `Set` and `Delete` operations within the KVStores the service -is configured to listen to. - -##### Decoding - -To decode the files written in the above format we read all the bytes from a given file into memory and segment them into proto -messages based on the length-prefixing of each message. Once segmented, it is known that the first message is the ABCI request, -the last message is the ABCI response, and that every message in between is a `StoreKVPair`. This enables us to decode each segment into -the appropriate message type. - -The type of ABCI req/res, the block height, and the transaction index (where relevant) is known -from the file name, and the KVStore each `StoreKVPair` originates from is known since the `StoreKey` is included as a field in the proto message. diff --git a/store/v2/flat/store.go b/store/v2/flat/store.go deleted file mode 100644 index 9076ec15f4c7..000000000000 --- a/store/v2/flat/store.go +++ /dev/null @@ -1,479 +0,0 @@ -package flat - -import ( - "crypto/sha256" - "errors" - "fmt" - "io" - "math" - "sync" - - dbm "github.com/cosmos/cosmos-sdk/db" - "github.com/cosmos/cosmos-sdk/db/prefix" - abci "github.com/tendermint/tendermint/abci/types" - - util "github.com/cosmos/cosmos-sdk/internal" - "github.com/cosmos/cosmos-sdk/store/cachekv" - "github.com/cosmos/cosmos-sdk/store/listenkv" - "github.com/cosmos/cosmos-sdk/store/tracekv" - "github.com/cosmos/cosmos-sdk/store/types" - "github.com/cosmos/cosmos-sdk/store/v2/smt" - sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" - "github.com/cosmos/cosmos-sdk/types/kv" -) - -var ( - _ types.KVStore = (*Store)(nil) - _ types.CommitKVStore = (*Store)(nil) - _ types.Queryable = (*Store)(nil) -) - -var ( - merkleRootKey = []byte{0} // Key for root hash of Merkle tree - dataPrefix = []byte{1} // Prefix for state mappings - indexPrefix = []byte{2} // Prefix for Store reverse index - merkleNodePrefix = []byte{3} // Prefix for Merkle tree nodes - merkleValuePrefix = []byte{4} // Prefix for Merkle value mappings -) - -var ( - ErrVersionDoesNotExist = errors.New("version does not exist") - ErrMaximumHeight = errors.New("maximum block height reached") -) - -type StoreConfig struct { - // Version pruning options for backing DBs. - Pruning types.PruningOptions - // The backing DB to use for the state commitment Merkle tree data. - // If nil, Merkle data is stored in the state storage DB under a separate prefix. - MerkleDB dbm.DBConnection - InitialVersion uint64 -} - -// Store is a CommitKVStore which handles state storage and commitments as separate concerns, -// optionally using separate backing key-value DBs for each. -// Allows synchronized R/W access by locking. -type Store struct { - stateDB dbm.DBConnection - stateTxn dbm.DBReadWriter - dataTxn dbm.DBReadWriter - merkleTxn dbm.DBReadWriter - indexTxn dbm.DBReadWriter - // State commitment (SC) KV store for current version - merkleStore *smt.Store - - opts StoreConfig - mtx sync.RWMutex -} - -var DefaultStoreConfig = StoreConfig{Pruning: types.PruneDefault, MerkleDB: nil} - -// NewStore creates a new Store, or loads one if db contains existing data. 
-func NewStore(db dbm.DBConnection, opts StoreConfig) (ret *Store, err error) { - versions, err := db.Versions() - if err != nil { - return - } - loadExisting := false - // If the DB is not empty, attempt to load existing data - if saved := versions.Count(); saved != 0 { - if opts.InitialVersion != 0 && versions.Last() < opts.InitialVersion { - return nil, fmt.Errorf("latest saved version is less than initial version: %v < %v", - versions.Last(), opts.InitialVersion) - } - loadExisting = true - } - err = db.Revert() - if err != nil { - return - } - stateTxn := db.ReadWriter() - defer func() { - if err != nil { - err = util.CombineErrors(err, stateTxn.Discard(), "stateTxn.Discard also failed") - } - }() - merkleTxn := stateTxn - if opts.MerkleDB != nil { - var mversions dbm.VersionSet - mversions, err = opts.MerkleDB.Versions() - if err != nil { - return - } - // Version sets of each DB must match - if !versions.Equal(mversions) { - err = fmt.Errorf("Storage and Merkle DB have different version history") //nolint:stylecheck - return - } - err = opts.MerkleDB.Revert() - if err != nil { - return - } - merkleTxn = opts.MerkleDB.ReadWriter() - } - - var merkleStore *smt.Store - if loadExisting { - var root []byte - root, err = stateTxn.Get(merkleRootKey) - if err != nil { - return - } - if root == nil { - err = fmt.Errorf("could not get root of SMT") - return - } - merkleStore = loadSMT(merkleTxn, root) - } else { - merkleNodes := prefix.NewPrefixReadWriter(merkleTxn, merkleNodePrefix) - merkleValues := prefix.NewPrefixReadWriter(merkleTxn, merkleValuePrefix) - merkleStore = smt.NewStore(merkleNodes, merkleValues) - } - return &Store{ - stateDB: db, - stateTxn: stateTxn, - dataTxn: prefix.NewPrefixReadWriter(stateTxn, dataPrefix), - indexTxn: prefix.NewPrefixReadWriter(stateTxn, indexPrefix), - merkleTxn: merkleTxn, - merkleStore: merkleStore, - opts: opts, - }, nil -} - -func (s *Store) Close() error { - err := s.stateTxn.Discard() - if s.opts.MerkleDB != nil { - err = util.CombineErrors(err, s.merkleTxn.Discard(), "merkleTxn.Discard also failed") - } - return err -} - -// Get implements KVStore. -func (s *Store) Get(key []byte) []byte { - s.mtx.RLock() - defer s.mtx.RUnlock() - - val, err := s.dataTxn.Get(key) - if err != nil { - panic(err) - } - return val -} - -// Has implements KVStore. -func (s *Store) Has(key []byte) bool { - s.mtx.RLock() - defer s.mtx.RUnlock() - - has, err := s.dataTxn.Has(key) - if err != nil { - panic(err) - } - return has -} - -// Set implements KVStore. -func (s *Store) Set(key, value []byte) { - s.mtx.Lock() - defer s.mtx.Unlock() - - err := s.dataTxn.Set(key, value) - if err != nil { - panic(err) - } - s.merkleStore.Set(key, value) - khash := sha256.Sum256(key) - err = s.indexTxn.Set(khash[:], key) - if err != nil { - panic(err) - } -} - -// Delete implements KVStore. -func (s *Store) Delete(key []byte) { - khash := sha256.Sum256(key) - s.mtx.Lock() - defer s.mtx.Unlock() - - s.merkleStore.Delete(key) - _ = s.indexTxn.Delete(khash[:]) - _ = s.dataTxn.Delete(key) -} - -type contentsIterator struct { - dbm.Iterator - valid bool -} - -func newIterator(source dbm.Iterator) *contentsIterator { - ret := &contentsIterator{Iterator: source} - ret.Next() - return ret -} - -func (it *contentsIterator) Next() { it.valid = it.Iterator.Next() } -func (it *contentsIterator) Valid() bool { return it.valid } - -// Iterator implements KVStore. 
-func (s *Store) Iterator(start, end []byte) types.Iterator { - iter, err := s.dataTxn.Iterator(start, end) - if err != nil { - panic(err) - } - return newIterator(iter) -} - -// ReverseIterator implements KVStore. -func (s *Store) ReverseIterator(start, end []byte) types.Iterator { - iter, err := s.dataTxn.ReverseIterator(start, end) - if err != nil { - panic(err) - } - return newIterator(iter) -} - -// GetStoreType implements Store. -func (s *Store) GetStoreType() types.StoreType { - return types.StoreTypeDecoupled -} - -// Commit implements Committer. -func (s *Store) Commit() types.CommitID { - versions, err := s.stateDB.Versions() - if err != nil { - panic(err) - } - target := versions.Last() + 1 - if target > math.MaxInt64 { - panic(ErrMaximumHeight) - } - // Fast forward to initialversion if needed - if s.opts.InitialVersion != 0 && target < s.opts.InitialVersion { - target = s.opts.InitialVersion - } - cid, err := s.commit(target) - if err != nil { - panic(err) - } - - previous := cid.Version - 1 - if s.opts.Pruning.KeepEvery != 1 && s.opts.Pruning.Interval != 0 && cid.Version%int64(s.opts.Pruning.Interval) == 0 { - // The range of newly prunable versions - lastPrunable := previous - int64(s.opts.Pruning.KeepRecent) - firstPrunable := lastPrunable - int64(s.opts.Pruning.Interval) - for version := firstPrunable; version <= lastPrunable; version++ { - if s.opts.Pruning.KeepEvery == 0 || version%int64(s.opts.Pruning.KeepEvery) != 0 { - s.stateDB.DeleteVersion(uint64(version)) - if s.opts.MerkleDB != nil { - s.opts.MerkleDB.DeleteVersion(uint64(version)) - } - } - } - } - return *cid -} - -func (s *Store) commit(target uint64) (id *types.CommitID, err error) { - root := s.merkleStore.Root() - err = s.stateTxn.Set(merkleRootKey, root) - if err != nil { - return - } - err = s.stateTxn.Commit() - if err != nil { - return - } - defer func() { - if err != nil { - err = util.CombineErrors(err, s.stateDB.Revert(), "stateDB.Revert also failed") - } - }() - err = s.stateDB.SaveVersion(target) - if err != nil { - return - } - - stateTxn := s.stateDB.ReadWriter() - defer func() { - if err != nil { - err = util.CombineErrors(err, stateTxn.Discard(), "stateTxn.Discard also failed") - } - }() - merkleTxn := stateTxn - - // If DBs are not separate, Merkle state has been commmitted & snapshotted - if s.opts.MerkleDB != nil { - defer func() { - if err != nil { - if delerr := s.stateDB.DeleteVersion(target); delerr != nil { - err = fmt.Errorf("%w: commit rollback failed: %v", err, delerr) - } - } - }() - - err = s.merkleTxn.Commit() - if err != nil { - return - } - defer func() { - if err != nil { - err = util.CombineErrors(err, s.opts.MerkleDB.Revert(), "merkleDB.Revert also failed") - } - }() - - err = s.opts.MerkleDB.SaveVersion(target) - if err != nil { - return - } - merkleTxn = s.opts.MerkleDB.ReadWriter() - } - - s.stateTxn = stateTxn - s.dataTxn = prefix.NewPrefixReadWriter(stateTxn, dataPrefix) - s.indexTxn = prefix.NewPrefixReadWriter(stateTxn, indexPrefix) - s.merkleTxn = merkleTxn - s.merkleStore = loadSMT(merkleTxn, root) - - return &types.CommitID{Version: int64(target), Hash: root}, nil -} - -// LastCommitID implements Committer. 
-func (s *Store) LastCommitID() types.CommitID { - versions, err := s.stateDB.Versions() - if err != nil { - panic(err) - } - last := versions.Last() - if last == 0 { - return types.CommitID{} - } - // Latest Merkle root is the one currently stored - hash, err := s.stateTxn.Get(merkleRootKey) - if err != nil { - panic(err) - } - return types.CommitID{Version: int64(last), Hash: hash} -} - -func (s *Store) GetPruning() types.PruningOptions { return s.opts.Pruning } -func (s *Store) SetPruning(po types.PruningOptions) { s.opts.Pruning = po } - -// Query implements ABCI interface, allows queries. -// -// by default we will return from (latest height -1), -// as we will have merkle proofs immediately (header height = data height + 1) -// If latest-1 is not present, use latest (which must be present) -// if you care to have the latest data to see a tx results, you must -// explicitly set the height you want to see -func (s *Store) Query(req abci.RequestQuery) (res abci.ResponseQuery) { - if len(req.Data) == 0 { - return sdkerrors.QueryResult(sdkerrors.Wrap(sdkerrors.ErrTxDecode, "query cannot be zero length"), false) - } - - // if height is 0, use the latest height - height := req.Height - if height == 0 { - versions, err := s.stateDB.Versions() - if err != nil { - return sdkerrors.QueryResult(errors.New("failed to get version info"), false) - } - latest := versions.Last() - if versions.Exists(latest - 1) { - height = int64(latest - 1) - } else { - height = int64(latest) - } - } - if height < 0 { - return sdkerrors.QueryResult(fmt.Errorf("height overflow: %v", height), false) - } - res.Height = height - - switch req.Path { - case "/key": - var err error - res.Key = req.Data // data holds the key bytes - - dbr, err := s.stateDB.ReaderAt(uint64(height)) - if err != nil { - if errors.Is(err, dbm.ErrVersionDoesNotExist) { - err = sdkerrors.ErrInvalidHeight - } - return sdkerrors.QueryResult(err, false) - } - defer dbr.Discard() - contents := prefix.NewPrefixReader(dbr, dataPrefix) - res.Value, err = contents.Get(res.Key) - if err != nil { - return sdkerrors.QueryResult(err, false) - } - if !req.Prove { - break - } - merkleView := dbr - if s.opts.MerkleDB != nil { - merkleView, err = s.opts.MerkleDB.ReaderAt(uint64(height)) - if err != nil { - return sdkerrors.QueryResult( - fmt.Errorf("version exists in state DB but not Merkle DB: %v", height), false) - } - defer merkleView.Discard() - } - root, err := dbr.Get(merkleRootKey) - if err != nil { - return sdkerrors.QueryResult(err, false) - } - if root == nil { - return sdkerrors.QueryResult(errors.New("Merkle root hash not found"), false) //nolint:stylecheck - } - merkleStore := loadSMT(dbm.ReaderAsReadWriter(merkleView), root) - res.ProofOps, err = merkleStore.GetProof(res.Key) - if err != nil { - return sdkerrors.QueryResult(fmt.Errorf("Merkle proof creation failed for key: %v", res.Key), false) //nolint:stylecheck - } - - case "/subspace": - pairs := kv.Pairs{ - Pairs: make([]kv.Pair, 0), - } - - subspace := req.Data - res.Key = subspace - - iterator := s.Iterator(subspace, types.PrefixEndBytes(subspace)) - for ; iterator.Valid(); iterator.Next() { - pairs.Pairs = append(pairs.Pairs, kv.Pair{Key: iterator.Key(), Value: iterator.Value()}) - } - iterator.Close() - - bz, err := pairs.Marshal() - if err != nil { - panic(fmt.Errorf("failed to marshal KV pairs: %w", err)) - } - - res.Value = bz - - default: - return sdkerrors.QueryResult(sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "unexpected query path: %v", req.Path), false) - } - - return res -} - 
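-// Example usage (an illustrative sketch, assuming a *Store s obtained from
-// NewStore; this comment block is an editorial example, not original code):
-// a caller can request a proven read of a key at a given height via ABCI:
-//
-//	res := s.Query(abci.RequestQuery{
-//		Path:   "/key",
-//		Data:   []byte("mykey"), // data holds the raw key bytes
-//		Height: 10,              // 0 defaults to latest-1 when available
-//		Prove:  true,            // also populate res.ProofOps with a Merkle proof
-//	})
-//	// res.Value holds the value bytes (nil if the key is absent).
-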
-func loadSMT(merkleTxn dbm.DBReadWriter, root []byte) *smt.Store { - merkleNodes := prefix.NewPrefixReadWriter(merkleTxn, merkleNodePrefix) - merkleValues := prefix.NewPrefixReadWriter(merkleTxn, merkleValuePrefix) - return smt.LoadStore(merkleNodes, merkleValues, root) -} - -func (s *Store) CacheWrap() types.CacheWrap { - return cachekv.NewStore(s) -} - -func (s *Store) CacheWrapWithTrace(w io.Writer, tc types.TraceContext) types.CacheWrap { - return cachekv.NewStore(tracekv.NewStore(s, w, tc)) -} - -func (s *Store) CacheWrapWithListeners(storeKey types.StoreKey, listeners []types.WriteListener) types.CacheWrap { - return cachekv.NewStore(listenkv.NewStore(s, storeKey, listeners)) -} diff --git a/store/v2/smt/store.go b/store/v2/smt/store.go deleted file mode 100644 index ce4130174337..000000000000 --- a/store/v2/smt/store.go +++ /dev/null @@ -1,99 +0,0 @@ -package smt - -import ( - "crypto/sha256" - "errors" - - "github.com/cosmos/cosmos-sdk/store/types" - tmcrypto "github.com/tendermint/tendermint/proto/tendermint/crypto" - - "github.com/lazyledger/smt" -) - -var ( - _ types.BasicKVStore = (*Store)(nil) -) - -var ( - errKeyEmpty = errors.New("key is empty or nil") - errValueNil = errors.New("value is nil") -) - -// Store Implements types.KVStore and CommitKVStore. -type Store struct { - tree *smt.SparseMerkleTree -} - -func NewStore(nodes, values smt.MapStore) *Store { - return &Store{ - tree: smt.NewSparseMerkleTree(nodes, values, sha256.New()), - } -} - -func LoadStore(nodes, values smt.MapStore, root []byte) *Store { - return &Store{ - tree: smt.ImportSparseMerkleTree(nodes, values, sha256.New(), root), - } -} - -func (s *Store) GetProof(key []byte) (*tmcrypto.ProofOps, error) { - proof, err := s.tree.Prove(key) - if err != nil { - return nil, err - } - op := NewProofOp(s.tree.Root(), key, SHA256, proof) - return &tmcrypto.ProofOps{Ops: []tmcrypto.ProofOp{op.ProofOp()}}, nil -} - -func (s *Store) Root() []byte { return s.tree.Root() } - -// BasicKVStore interface below: - -// Get returns nil iff key doesn't exist. Panics on nil key. -func (s *Store) Get(key []byte) []byte { - if len(key) == 0 { - panic(errKeyEmpty) - } - val, err := s.tree.Get(key) - if err != nil { - panic(err) - } - return val -} - -// Has checks if a key exists. Panics on nil key. -func (s *Store) Has(key []byte) bool { - if len(key) == 0 { - panic(errKeyEmpty) - } - has, err := s.tree.Has(key) - if err != nil { - panic(err) - } - return has -} - -// Set sets the key. Panics on nil key or value. -func (s *Store) Set(key []byte, value []byte) { - if len(key) == 0 { - panic(errKeyEmpty) - } - if value == nil { - panic(errValueNil) - } - _, err := s.tree.Update(key, value) - if err != nil { - panic(err) - } -} - -// Delete deletes the key. Panics on nil key. 
-func (s *Store) Delete(key []byte) { - if len(key) == 0 { - panic(errKeyEmpty) - } - _, err := s.tree.Delete(key) - if err != nil { - panic(err) - } -} diff --git a/x/auth/middleware/basic.go b/x/auth/middleware/basic.go deleted file mode 100644 index 1bfd98d868d6..000000000000 --- a/x/auth/middleware/basic.go +++ /dev/null @@ -1,358 +0,0 @@ -package middleware - -import ( - "context" - - "github.com/cosmos/cosmos-sdk/codec/legacy" - "github.com/cosmos/cosmos-sdk/crypto/keys/multisig" - cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types" - sdk "github.com/cosmos/cosmos-sdk/types" - sdkerrors "github.com/cosmos/cosmos-sdk/types/errors" - "github.com/cosmos/cosmos-sdk/types/tx" - "github.com/cosmos/cosmos-sdk/types/tx/signing" - "github.com/cosmos/cosmos-sdk/x/auth/migrations/legacytx" - authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing" - abci "github.com/tendermint/tendermint/abci/types" -) - -type validateBasicTxHandler struct { - next tx.Handler -} - -// ValidateBasicMiddleware will call tx.ValidateBasic, msg.ValidateBasic(for each msg inside tx) -// and return any non-nil error. -// If ValidateBasic passes, middleware calls next middleware in chain. Note, -// validateBasicTxHandler will not get executed on ReCheckTx since it -// is not dependent on application state. -func ValidateBasicMiddleware(txh tx.Handler) tx.Handler { - return validateBasicTxHandler{ - next: txh, - } -} - -var _ tx.Handler = validateBasicTxHandler{} - -// validateBasicTxMsgs executes basic validator calls for messages. -func validateBasicTxMsgs(msgs []sdk.Msg) error { - if len(msgs) == 0 { - return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "must contain at least one message") - } - - for _, msg := range msgs { - err := msg.ValidateBasic() - if err != nil { - return err - } - } - - return nil -} - -// CheckTx implements tx.Handler.CheckTx. -func (txh validateBasicTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { - // no need to validate basic on recheck tx, call next middleware - if req.Type == abci.CheckTxType_Recheck { - return txh.next.CheckTx(ctx, tx, req) - } - - if err := validateBasicTxMsgs(tx.GetMsgs()); err != nil { - return abci.ResponseCheckTx{}, err - } - - if err := tx.ValidateBasic(); err != nil { - return abci.ResponseCheckTx{}, err - } - - return txh.next.CheckTx(ctx, tx, req) -} - -// DeliverTx implements tx.Handler.DeliverTx. -func (txh validateBasicTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { - if err := tx.ValidateBasic(); err != nil { - return abci.ResponseDeliverTx{}, err - } - - if err := validateBasicTxMsgs(tx.GetMsgs()); err != nil { - return abci.ResponseDeliverTx{}, err - } - - return txh.next.DeliverTx(ctx, tx, req) -} - -// SimulateTx implements tx.Handler.SimulateTx. -func (txh validateBasicTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) { - if err := sdkTx.ValidateBasic(); err != nil { - return tx.ResponseSimulateTx{}, err - } - - if err := validateBasicTxMsgs(sdkTx.GetMsgs()); err != nil { - return tx.ResponseSimulateTx{}, err - } - - return txh.next.SimulateTx(ctx, sdkTx, req) -} - -var _ tx.Handler = txTimeoutHeightTxHandler{} - -type txTimeoutHeightTxHandler struct { - next tx.Handler -} - -// TxTimeoutHeightMiddleware defines a middleware that checks for a -// tx height timeout. 
-func TxTimeoutHeightMiddleware(txh tx.Handler) tx.Handler { - return txTimeoutHeightTxHandler{ - next: txh, - } -} - -func checkTimeout(ctx context.Context, tx sdk.Tx) error { - sdkCtx := sdk.UnwrapSDKContext(ctx) - timeoutTx, ok := tx.(sdk.TxWithTimeoutHeight) - if !ok { - return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "expected tx to implement TxWithTimeoutHeight") - } - - timeoutHeight := timeoutTx.GetTimeoutHeight() - if timeoutHeight > 0 && uint64(sdkCtx.BlockHeight()) > timeoutHeight { - return sdkerrors.Wrapf( - sdkerrors.ErrTxTimeoutHeight, "block height: %d, timeout height: %d", sdkCtx.BlockHeight(), timeoutHeight, - ) - } - - return nil -} - -// CheckTx implements tx.Handler.CheckTx. -func (txh txTimeoutHeightTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { - if err := checkTimeout(ctx, tx); err != nil { - return abci.ResponseCheckTx{}, err - } - - return txh.next.CheckTx(ctx, tx, req) -} - -// DeliverTx implements tx.Handler.DeliverTx. -func (txh txTimeoutHeightTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { - if err := checkTimeout(ctx, tx); err != nil { - return abci.ResponseDeliverTx{}, err - } - - return txh.next.DeliverTx(ctx, tx, req) -} - -// SimulateTx implements tx.Handler.SimulateTx. -func (txh txTimeoutHeightTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) { - if err := checkTimeout(ctx, sdkTx); err != nil { - return tx.ResponseSimulateTx{}, err - } - - return txh.next.SimulateTx(ctx, sdkTx, req) -} - -type validateMemoTxHandler struct { - ak AccountKeeper - next tx.Handler -} - -// ValidateMemoMiddleware will validate memo given the parameters passed in -// If memo is too large middleware returns with error, otherwise call next middleware -// CONTRACT: Tx must implement TxWithMemo interface -func ValidateMemoMiddleware(ak AccountKeeper) tx.Middleware { - return func(txHandler tx.Handler) tx.Handler { - return validateMemoTxHandler{ - ak: ak, - next: txHandler, - } - } -} - -var _ tx.Handler = validateMemoTxHandler{} - -func (vmm validateMemoTxHandler) checkForValidMemo(ctx context.Context, tx sdk.Tx) error { - sdkCtx := sdk.UnwrapSDKContext(ctx) - memoTx, ok := tx.(sdk.TxWithMemo) - if !ok { - return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type") - } - - params := vmm.ak.GetParams(sdkCtx) - - memoLength := len(memoTx.GetMemo()) - if uint64(memoLength) > params.MaxMemoCharacters { - return sdkerrors.Wrapf(sdkerrors.ErrMemoTooLarge, - "maximum number of characters is %d but received %d characters", - params.MaxMemoCharacters, memoLength, - ) - } - - return nil -} - -// CheckTx implements tx.Handler.CheckTx method. -func (vmm validateMemoTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) { - if err := vmm.checkForValidMemo(ctx, tx); err != nil { - return abci.ResponseCheckTx{}, err - } - - return vmm.next.CheckTx(ctx, tx, req) -} - -// DeliverTx implements tx.Handler.DeliverTx method. -func (vmm validateMemoTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) { - if err := vmm.checkForValidMemo(ctx, tx); err != nil { - return abci.ResponseDeliverTx{}, err - } - - return vmm.next.DeliverTx(ctx, tx, req) -} - -// SimulateTx implements tx.Handler.SimulateTx method. 
-type validateMemoTxHandler struct {
-	ak   AccountKeeper
-	next tx.Handler
-}
-
-// ValidateMemoMiddleware validates the tx memo against the auth module parameters.
-// If the memo is too large, the middleware returns an error; otherwise it calls the next middleware.
-// CONTRACT: Tx must implement the TxWithMemo interface.
-func ValidateMemoMiddleware(ak AccountKeeper) tx.Middleware {
-	return func(txHandler tx.Handler) tx.Handler {
-		return validateMemoTxHandler{
-			ak:   ak,
-			next: txHandler,
-		}
-	}
-}
-
-var _ tx.Handler = validateMemoTxHandler{}
-
-func (vmm validateMemoTxHandler) checkForValidMemo(ctx context.Context, tx sdk.Tx) error {
-	sdkCtx := sdk.UnwrapSDKContext(ctx)
-	memoTx, ok := tx.(sdk.TxWithMemo)
-	if !ok {
-		return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid transaction type")
-	}
-
-	params := vmm.ak.GetParams(sdkCtx)
-
-	memoLength := len(memoTx.GetMemo())
-	if uint64(memoLength) > params.MaxMemoCharacters {
-		return sdkerrors.Wrapf(sdkerrors.ErrMemoTooLarge,
-			"maximum number of characters is %d but received %d characters",
-			params.MaxMemoCharacters, memoLength,
-		)
-	}
-
-	return nil
-}
-
-// CheckTx implements tx.Handler.CheckTx method.
-func (vmm validateMemoTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) {
-	if err := vmm.checkForValidMemo(ctx, tx); err != nil {
-		return abci.ResponseCheckTx{}, err
-	}
-
-	return vmm.next.CheckTx(ctx, tx, req)
-}
-
-// DeliverTx implements tx.Handler.DeliverTx method.
-func (vmm validateMemoTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) {
-	if err := vmm.checkForValidMemo(ctx, tx); err != nil {
-		return abci.ResponseDeliverTx{}, err
-	}
-
-	return vmm.next.DeliverTx(ctx, tx, req)
-}
-
-// SimulateTx implements tx.Handler.SimulateTx method.
-func (vmm validateMemoTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) {
-	if err := vmm.checkForValidMemo(ctx, sdkTx); err != nil {
-		return tx.ResponseSimulateTx{}, err
-	}
-
-	return vmm.next.SimulateTx(ctx, sdkTx, req)
-}
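Editor's note: the memo check compares `len(memo)` against the `MaxMemoCharacters` auth parameter. Worth noting: `len` on a Go string counts bytes, so multi-byte UTF-8 memos reach the limit sooner than their character count suggests. A quick sketch with an assumed limit value:

```go
package main

import "fmt"

const maxMemoCharacters = 256 // assumed value; the real limit is the auth param MaxMemoCharacters

// validMemo mirrors checkForValidMemo's core check. Note that len() on a Go
// string counts bytes, not characters, so multi-byte memos hit the limit sooner.
func validMemo(memo string) error {
	if n := uint64(len(memo)); n > maxMemoCharacters {
		return fmt.Errorf("maximum number of characters is %d but received %d characters", maxMemoCharacters, n)
	}
	return nil
}

func main() {
	fmt.Println(validMemo("order #42"))               // <nil>
	fmt.Println(validMemo(string(make([]byte, 300)))) // error: over the limit
}
```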
-var _ tx.Handler = consumeTxSizeGasTxHandler{}
-
-type consumeTxSizeGasTxHandler struct {
-	ak   AccountKeeper
-	next tx.Handler
-}
-
-// ConsumeTxSizeGasMiddleware consumes gas proportional
-// to the size of the tx before calling the next middleware. Note, the gas costs will be
-// slightly overestimated due to the fact that any given signing account may need
-// to be retrieved from state.
-//
-// CONTRACT: If simulate=true, then signatures must either be completely filled
-// in or empty.
-// CONTRACT: To use this middleware, signatures of the transaction must be represented
-// as legacytx.StdSignature, otherwise simulate mode will incorrectly estimate gas cost.
-func ConsumeTxSizeGasMiddleware(ak AccountKeeper) tx.Middleware {
-	return func(txHandler tx.Handler) tx.Handler {
-		return consumeTxSizeGasTxHandler{
-			ak:   ak,
-			next: txHandler,
-		}
-	}
-}
-
-func (cgts consumeTxSizeGasTxHandler) simulateSigGasCost(ctx context.Context, tx sdk.Tx) error {
-	sdkCtx := sdk.UnwrapSDKContext(ctx)
-	params := cgts.ak.GetParams(sdkCtx)
-
-	sigTx, ok := tx.(authsigning.SigVerifiableTx)
-	if !ok {
-		return sdkerrors.Wrap(sdkerrors.ErrTxDecode, "invalid tx type")
-	}
-
-	// in simulate mode, each element should be a nil signature
-	sigs, err := sigTx.GetSignaturesV2()
-	if err != nil {
-		return err
-	}
-	n := len(sigs)
-
-	for i, signer := range sigTx.GetSigners() {
-		// if the signature is already filled in, no need to simulate gas cost
-		if i < n && !isIncompleteSignature(sigs[i].Data) {
-			continue
-		}
-
-		var pubkey cryptotypes.PubKey
-
-		acc := cgts.ak.GetAccount(sdkCtx, signer)
-
-		// use placeholder simSecp256k1Pubkey if sig is nil
-		if acc == nil || acc.GetPubKey() == nil {
-			pubkey = simSecp256k1Pubkey
-		} else {
-			pubkey = acc.GetPubKey()
-		}
-
-		// use StdSignature to mock the size of a full signature
-		simSig := legacytx.StdSignature{ //nolint:staticcheck // this will be removed when proto is ready
-			Signature: simSecp256k1Sig[:],
-			PubKey:    pubkey,
-		}
-
-		sigBz := legacy.Cdc.MustMarshal(simSig)
-		cost := sdk.Gas(len(sigBz) + 6)
-
-		// If the pubkey is a multi-signature pubkey, then we estimate for the maximum
-		// number of signers.
-		if _, ok := pubkey.(*multisig.LegacyAminoPubKey); ok {
-			cost *= params.TxSigLimit
-		}
-
-		sdkCtx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*cost, "txSize")
-	}
-
-	return nil
-}
-
-func (cgts consumeTxSizeGasTxHandler) consumeTxSizeGas(ctx context.Context, _ sdk.Tx, txBytes []byte, simulate bool) error {
-	sdkCtx := sdk.UnwrapSDKContext(ctx)
-	params := cgts.ak.GetParams(sdkCtx)
-	sdkCtx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*sdk.Gas(len(txBytes)), "txSize")
-
-	return nil
-}
-
-// CheckTx implements tx.Handler.CheckTx.
-func (cgts consumeTxSizeGasTxHandler) CheckTx(ctx context.Context, tx sdk.Tx, req abci.RequestCheckTx) (abci.ResponseCheckTx, error) {
-	if err := cgts.consumeTxSizeGas(ctx, tx, req.GetTx(), false); err != nil {
-		return abci.ResponseCheckTx{}, err
-	}
-
-	return cgts.next.CheckTx(ctx, tx, req)
-}
-
-// DeliverTx implements tx.Handler.DeliverTx.
-func (cgts consumeTxSizeGasTxHandler) DeliverTx(ctx context.Context, tx sdk.Tx, req abci.RequestDeliverTx) (abci.ResponseDeliverTx, error) {
-	if err := cgts.consumeTxSizeGas(ctx, tx, req.GetTx(), false); err != nil {
-		return abci.ResponseDeliverTx{}, err
-	}
-
-	return cgts.next.DeliverTx(ctx, tx, req)
-}
-
-// SimulateTx implements tx.Handler.SimulateTx.
-func (cgts consumeTxSizeGasTxHandler) SimulateTx(ctx context.Context, sdkTx sdk.Tx, req tx.RequestSimulateTx) (tx.ResponseSimulateTx, error) {
-	if err := cgts.consumeTxSizeGas(ctx, sdkTx, req.TxBytes, true); err != nil {
-		return tx.ResponseSimulateTx{}, err
-	}
-
-	if err := cgts.simulateSigGasCost(ctx, sdkTx); err != nil {
-		return tx.ResponseSimulateTx{}, err
-	}
-
-	return cgts.next.SimulateTx(ctx, sdkTx, req)
-}
-
-// isIncompleteSignature reports whether SignatureData is missing signature
-// bytes; simulation uses it to decide which signatures still need a mocked gas cost.
-func isIncompleteSignature(data signing.SignatureData) bool {
-	if data == nil {
-		return true
-	}
-
-	switch data := data.(type) {
-	case *signing.SingleSignatureData:
-		return len(data.Signature) == 0
-	case *signing.MultiSignatureData:
-		if len(data.Signatures) == 0 {
-			return true
-		}
-		for _, s := range data.Signatures {
-			if isIncompleteSignature(s) {
-				return true
-			}
-		}
-	}
-
-	return false
-}
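Editor's note: in simulate mode the handler charges for a placeholder signature and, for multisig pubkeys, multiplies the per-signature cost by the TxSigLimit parameter as an upper bound. A rough standalone model of that cost arithmetic follows; the constants are illustrative, and the real encoded size comes from the amino-marshalled StdSignature.

```go
package main

import "fmt"

const (
	txSizeCostPerByte = 10 // illustrative; the real value comes from the auth module params
	txSigLimit        = 7  // illustrative cap on multisig signers
	sigOverheadBytes  = 6  // the middleware adds 6 bytes on top of the encoded signature
)

// simSigGas models the gas charged for one simulated signature: the encoded
// placeholder signature size plus overhead, multiplied by the signer cap when
// the pubkey is a multisig, then priced per byte.
func simSigGas(encodedSigLen int, isMultisig bool) uint64 {
	cost := uint64(encodedSigLen + sigOverheadBytes)
	if isMultisig {
		cost *= txSigLimit
	}
	return txSizeCostPerByte * cost
}

func main() {
	fmt.Println(simSigGas(72, false)) // 780
	fmt.Println(simSigGas(72, true))  // 5460: estimated for the maximum signer count
}
```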
diff --git a/x/epoching/keeper/keeper.go b/x/epoching/keeper/keeper.go
deleted file mode 100644
index f6869f50e8f6..000000000000
--- a/x/epoching/keeper/keeper.go
+++ /dev/null
@@ -1,192 +0,0 @@
-package keeper
-
-import (
-	"time"
-
-	"github.com/cosmos/cosmos-sdk/codec"
-	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
-	storetypes "github.com/cosmos/cosmos-sdk/store/types"
-	sdk "github.com/cosmos/cosmos-sdk/types"
-	db "github.com/tendermint/tm-db"
-)
-
-const (
-	DefaultEpochActionID = 1
-	DefaultEpochNumber   = 0
-)
-
-var (
-	NextEpochActionID      = []byte{0x11}
-	EpochNumberID          = []byte{0x12}
-	EpochActionQueuePrefix = []byte{0x13} // prefix for the epoch action queue
-)
-
-// Keeper of the store
-type Keeper struct {
-	storeKey storetypes.StoreKey
-	cdc      codec.BinaryCodec
-	// Used to calculate the estimated next epoch time.
-	// This is local to every node.
-	// TODO: remove in favor of a consensus param when it's added
-	commitTimeout time.Duration
-}
-
-// NewKeeper creates an epoch queue manager
-func NewKeeper(cdc codec.BinaryCodec, key storetypes.StoreKey, commitTimeout time.Duration) Keeper {
-	return Keeper{
-		storeKey:      key,
-		cdc:           cdc,
-		commitTimeout: commitTimeout,
-	}
-}
-
-// GetNewActionID returns the ID to be used for the next epoch action
-func (k Keeper) GetNewActionID(ctx sdk.Context) uint64 {
-	store := ctx.KVStore(k.storeKey)
-
-	bz := store.Get(NextEpochActionID)
-	if bz == nil {
-		// return the default action ID (1)
-		return DefaultEpochActionID
-	}
-	id := sdk.BigEndianToUint64(bz)
-
-	// increment the next action ID
-	store.Set(NextEpochActionID, sdk.Uint64ToBigEndian(id+1))
-
-	return id
-}
-
-// ActionStoreKey returns the action store key from an ID.
-// Note: byte(epochNumber) and byte(actionID) truncate values above 255,
-// so distinct epochs or actions can collide on the same key.
-func ActionStoreKey(epochNumber int64, actionID uint64) []byte {
-	return append(EpochActionQueuePrefix, byte(epochNumber), byte(actionID))
-}
-
-// QueueMsgForEpoch saves an action that needs to be executed on the next epoch
-func (k Keeper) QueueMsgForEpoch(ctx sdk.Context, epochNumber int64, msg sdk.Msg) {
-	store := ctx.KVStore(k.storeKey)
-
-	bz, err := k.cdc.MarshalInterface(msg)
-	if err != nil {
-		panic(err)
-	}
-
-	actionID := k.GetNewActionID(ctx)
-	store.Set(ActionStoreKey(epochNumber, actionID), bz)
-}
-
-// RestoreEpochAction restores an action that needs to be executed on the next epoch
-func (k Keeper) RestoreEpochAction(ctx sdk.Context, epochNumber int64, action *codectypes.Any) {
-	store := ctx.KVStore(k.storeKey)
-
-	// reference from TestMarshalAny(t *testing.T)
-	bz, err := k.cdc.MarshalInterface(action)
-	if err != nil {
-		panic(err)
-	}
-
-	actionID := k.GetNewActionID(ctx)
-	store.Set(ActionStoreKey(epochNumber, actionID), bz)
-}
-
-// GetEpochMsg gets a msg by ID
-func (k Keeper) GetEpochMsg(ctx sdk.Context, epochNumber int64, actionID uint64) sdk.Msg {
-	store := ctx.KVStore(k.storeKey)
-
-	bz := store.Get(ActionStoreKey(epochNumber, actionID))
-	if bz == nil {
-		return nil
-	}
-
-	var action sdk.Msg
-	k.cdc.UnmarshalInterface(bz, &action)
-
-	return action
-}
-
-// GetEpochActions gets all actions
-func (k Keeper) GetEpochActions(ctx sdk.Context) []sdk.Msg {
-	actions := []sdk.Msg{}
-	iterator := k.GetEpochActionsIterator(ctx)
-	defer iterator.Close()
-
-	for ; iterator.Valid(); iterator.Next() {
-		var action sdk.Msg
-		bz := iterator.Value()
-		k.cdc.UnmarshalInterface(bz, &action)
-		actions = append(actions, action)
-	}
-
-	return actions
-}
-
-// GetEpochActionsIterator returns an iterator over the epoch action queue
-func (k Keeper) GetEpochActionsIterator(ctx sdk.Context) db.Iterator {
-	return sdk.KVStorePrefixIterator(ctx.KVStore(k.storeKey), EpochActionQueuePrefix)
-}
-
-// DequeueEpochActions dequeues all the actions stored in the epoch queue
-func (k Keeper) DequeueEpochActions(ctx sdk.Context) {
-	store := ctx.KVStore(k.storeKey)
-	iterator := sdk.KVStorePrefixIterator(store, EpochActionQueuePrefix)
-	defer iterator.Close()
-
-	for ; iterator.Valid(); iterator.Next() {
-		key := iterator.Key()
-		store.Delete(key)
-	}
-}
-
-// DeleteByKey deletes an item by key
-func (k Keeper) DeleteByKey(ctx sdk.Context, key []byte) {
-	store := ctx.KVStore(k.storeKey)
-	store.Delete(key)
-}
-
-// GetEpochActionByIterator gets an action from an iterator
-func (k Keeper) GetEpochActionByIterator(iterator db.Iterator) sdk.Msg {
-	bz := iterator.Value()
-
-	var action sdk.Msg
-	k.cdc.UnmarshalInterface(bz, &action)
-
-	return action
-}
-
-// SetEpochNumber sets the epoch number
-func (k Keeper) SetEpochNumber(ctx sdk.Context, epochNumber int64) {
-	store := ctx.KVStore(k.storeKey)
-	store.Set(EpochNumberID, sdk.Uint64ToBigEndian(uint64(epochNumber)))
-}
-
-// GetEpochNumber fetches the epoch number
-func (k Keeper) GetEpochNumber(ctx sdk.Context) int64 {
-	store := ctx.KVStore(k.storeKey)
-
-	bz := store.Get(EpochNumberID)
-	if bz == nil {
-		return DefaultEpochNumber
-	}
-
-	return int64(sdk.BigEndianToUint64(bz))
-}
-
-// IncreaseEpochNumber increases the epoch number
-func (k Keeper) IncreaseEpochNumber(ctx sdk.Context) {
-	epochNumber := k.GetEpochNumber(ctx)
-	k.SetEpochNumber(ctx, epochNumber+1)
-}
-
-// GetNextEpochHeight returns the next epoch block height
-func (k Keeper) GetNextEpochHeight(ctx sdk.Context, epochInterval int64) int64 {
-	currentHeight := ctx.BlockHeight()
-	return currentHeight + (epochInterval - currentHeight%epochInterval)
-}
-
-// GetNextEpochTime returns the estimated next epoch time
-func (k Keeper) GetNextEpochTime(ctx sdk.Context, epochInterval int64) time.Time {
-	currentTime := ctx.BlockTime()
-	currentHeight := ctx.BlockHeight()
-
-	return currentTime.Add(k.commitTimeout * time.Duration(k.GetNextEpochHeight(ctx, epochInterval)-currentHeight))
-}
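Editor's note: ActionStoreKey is an instance of the key-layout bug class this PR calls out for x/bank: casting the epoch number and action ID to single bytes wraps values above 255, so distinct (epoch, action) pairs can collide. A standalone demonstration, with a fixed-width big-endian layout as the usual fix (helper names are ours):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

var epochActionQueuePrefix = []byte{0x13}

// truncatedKey mirrors ActionStoreKey: one byte per component,
// which silently wraps values above 255.
func truncatedKey(epochNumber int64, actionID uint64) []byte {
	return append(append([]byte{}, epochActionQueuePrefix...), byte(epochNumber), byte(actionID))
}

// fixedWidthKey is the usual fix: fixed-width big-endian encoding keeps
// every (epoch, action) pair distinct and preserves iteration order.
func fixedWidthKey(epochNumber int64, actionID uint64) []byte {
	key := append([]byte{}, epochActionQueuePrefix...)
	buf := make([]byte, 16)
	binary.BigEndian.PutUint64(buf[:8], uint64(epochNumber))
	binary.BigEndian.PutUint64(buf[8:], actionID)
	return append(key, buf...)
}

func main() {
	// epochs 1 and 257 collide once truncated to a single byte:
	fmt.Println(bytes.Equal(truncatedKey(1, 5), truncatedKey(257, 5)))   // true: collision
	fmt.Println(bytes.Equal(fixedWidthKey(1, 5), fixedWidthKey(257, 5))) // false
}
```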
diff --git a/x/epoching/spec/03_to_improve.md b/x/epoching/spec/03_to_improve.md
deleted file mode 100644
index 5ee5bd2ad0d5..000000000000
--- a/x/epoching/spec/03_to_improve.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-# Changes to make
-
-## Validator self-unbonding (which exceeds minimum self-delegation) could be required to start instantly
-
-Cases that trigger the unbonding process:
-
-- A validator's undelegation can unbond more tokens than its minimum_self_delegation, which automatically moves the validator into unbonding.
-  In this case, unbonding should start instantly.
-- A validator misses blocks and gets slashed.
-- A validator gets slashed for double signing.
-
-**Note:** When a validator begins the unbonding process, it could be required to move the validator into the unbonding state instantly.
- This is different from a specific delegator beginning to unbond. A validator beginning to unbond means that it's not in the set any more.
- A delegator unbonding from a validator removes their delegation from the validator.
-
-## Pending development
-
-```go
-// Changes to make
-// — Implement correct next epoch time calculation
-// — For validator self-undelegation, it could be required to start on the end blocker
-// — Implement TODOs on the PR #46
-// Implement CLI commands for querying
-// — BufferedValidators
-// — BufferedMsgCreateValidatorQueue, BufferedMsgEditValidatorQueue
-// — BufferedMsgUnjailQueue, BufferedMsgDelegateQueue, BufferedMsgRedelegationQueue, BufferedMsgUndelegateQueue
-// Write epoch-related tests with new scenarios
-// — Simulation test is important for finding bugs (ask the dev team for questions)
-// — Can easily add a simulator check to make sure all delegation amounts in the queue add up to the same amount that's in the EpochUnbondedPool
-// — I'd like it added as an invariant test for the simulator
-// — the simulator should check that the sum of all the queued delegations always equals the amount tracked in the data
-// — Staking/Slashing/Distribution module params are being modified by governance based on vote results instantly. We should test the effect.
-// — — Should test to see what would happen if max_validators is changed in the middle of an epoch
-// — we should define some new invariants that help check that everything is working smoothly with these new changes for the 3 modules, e.g. https://github.com/cosmos/cosmos-sdk/blob/master/x/staking/keeper/invariants.go
-// — — Within an epoch, ValidationPower = ValidationPower - SlashAmount
-// — — When the epoch actions queue is empty, the EpochDelegationPool balance should be zero
-// — we should count all the delegation changes that happen during the epoch, and then make sure that the resulting change at the end of the epoch is actually correct
-// — If the validator that I delegated to double signs at block 16, I should still get slashed instantly because even though I asked to unbond at 14, they still used my power at block 16; I should stop being liable for slashes only once my power stops being used
-// — On the converse of this, I should still be getting rewards while my power is being used. I shouldn't stop receiving rewards until block 20
-```
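Editor's note: the first pending item, "implement correct next epoch time calculation", refers to the estimation in the keeper above: GetNextEpochHeight rounds the current height up to the next epoch boundary, and GetNextEpochTime multiplies the remaining blocks by the node-local commit timeout. A worked restatement of the height arithmetic:

```go
package main

import "fmt"

// nextEpochHeight mirrors Keeper.GetNextEpochHeight: round the current height
// up to the next multiple of epochInterval. Note that a height already on an
// epoch boundary maps to the *following* boundary.
func nextEpochHeight(currentHeight, epochInterval int64) int64 {
	return currentHeight + (epochInterval - currentHeight%epochInterval)
}

func main() {
	fmt.Println(nextEpochHeight(95, 10))  // 100
	fmt.Println(nextEpochHeight(100, 10)) // 110, not 100: boundary heights skip ahead
	fmt.Println(nextEpochHeight(101, 10)) // 110
}
```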
diff --git a/x/group/internal/orm/spec/01_table.md b/x/group/internal/orm/spec/01_table.md
deleted file mode 100644
index 7b159b482dc1..000000000000
--- a/x/group/internal/orm/spec/01_table.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Table
-
-A table can be built given a `codec.ProtoMarshaler` model type, a prefix to access the underlying prefix store used to store table data, as well as a `Codec` for marshalling/unmarshalling.
-
-+++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/table.go#L24-L30
-
-In the prefix store, entities are stored under a unique identifier called `RowID`, which can be based on a `uint64` auto-increment counter, a string, or dynamic-size bytes.
-Regular CRUD operations can be performed on a table; these methods take a `sdk.KVStore` as a parameter to get the table prefix store.
-
-The `table` struct does not:
-
-- enforce uniqueness of the `RowID`
-- enforce prefix uniqueness of keys, i.e. not allowing one key to be a prefix
-  of another
-- optimize Gas usage conditions
-
-The `table` struct is private, so that we only have custom tables built on top of it that do satisfy these requirements.
-
-## AutoUInt64Table
-
-`AutoUInt64Table` is a table type with an auto-incrementing `uint64` ID.
-
-+++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/auto_uint64.go#L11-L14
-
-It's based on the `Sequence` struct, which is a persistent unique key generator based on a counter encoded as 8-byte big-endian.
-
-## PrimaryKeyTable
-
-`PrimaryKeyTable` provides simpler object-style ORM methods, where objects are persisted and loaded with a reference to their unique primary key.
-
-The model provided for creating a `PrimaryKeyTable` should implement the `PrimaryKeyed` interface:
-
-+++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/primary_key.go#L28-L41
-
-The `PrimaryKeyFields()` method returns the list of key parts for a given object.
-The primary key parts can be of type `[]byte`, `string`, or `uint64`.
-Key parts, except the last part, follow these rules:
-
-- `[]byte` is encoded with a single-byte length prefix
-- strings are null-terminated
-- `uint64` values are encoded as 8-byte big-endian
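Editor's note: a short sketch of how a composite primary key could be assembled under those three encoding rules (the helpers are illustrative, not the ORM's actual implementation):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// appendBytesPart encodes a non-final []byte key part with a single-byte length prefix.
func appendBytesPart(key, part []byte) []byte {
	return append(append(key, byte(len(part))), part...)
}

// appendStringPart encodes a non-final string key part as null-terminated bytes.
func appendStringPart(key []byte, part string) []byte {
	return append(append(key, part...), 0x00)
}

// appendUint64Part encodes a uint64 key part as 8-byte big-endian.
func appendUint64Part(key []byte, part uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, part)
	return append(key, buf...)
}

func main() {
	// composite key: (owner []byte, group string, seq uint64)
	var key []byte
	key = appendBytesPart(key, []byte{0xCA, 0xFE})
	key = appendStringPart(key, "validators")
	key = appendUint64Part(key, 7)
	// prints the length-prefixed bytes, the null-terminated string,
	// then the 8-byte big-endian integer
	fmt.Printf("%X\n", key)
}
```

Fixed-width and terminated encodings like these keep composite keys unambiguous and preserve lexicographic iteration order, which is the same property the variable-length-key fix in this PR restores for x/bank.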