From aed7f767bb399d36a1db1922d9c028663297e052 Mon Sep 17 00:00:00 2001
From: Dmitrii Golubev
Date: Wed, 8 Jun 2022 14:28:32 +0200
Subject: [PATCH] backport: merge result of tendermint/master with v0.8-dev
(#376)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* update abci cli test output (#8070)
* consensus: fix TestInvalidState race and reporting (#8071)
* proxy: collapse triforcated abci.Client (#8067)
* build(deps): Bump actions/checkout from 2.4.0 to 3 (#8076)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2.4.0 to 3.
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=2.4.0&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
* build(deps): Bump docker/login-action from 1.13.0 to 1.14.1 (#8075)
Bumps [docker/login-action](https://github.com/docker/login-action) from 1.13.0 to 1.14.1.
* build(deps): Bump golangci/golangci-lint-action from 2.5.2 to 3.1.0 (#8074)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 2.5.2 to 3.1.0.
* Fix govet errors for %w use in test errors. (#8083)
The %w verb is specific to fmt.Errorf and is not supported by the testing package's formatting functions.
* evidence: manage and initialize state objects more clearly in the pool (#8080)
* statesync: avoid leaking a thread during tests (#8085)
* statesync: avoid leaking a thread during tests
* fix
* Fix YAML front matter. (#8086)
Fixes #8052 again. Ideally we would have some way of detecting that this
happens before merging, but the way we build docs right now is kind of
complicated.
* abci++ spec: reorganizing basic concepts, adding outline for easy navigation (#8048)
* reorganizing basic concepts, adding outline to navigate easy
* Update spec/abci++/README.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: Sergio Mena
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* address problem with snapshot list data type
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update spec/abci++/abci++_basic_concepts_002_draft.md
Co-authored-by: M. J. Fromberger
* clarify handling events in same-execution model
* remove outdated text about vote extension signing
* clarification apphash state-sync
Co-authored-by: Sergio Mena
Co-authored-by: M. J. Fromberger
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* node: pass eventbus at construction time (#8084)
* node: pass eventbus at construction time
* remove cruft
* cmd: make reset more safe (#8081)
* add safe reset
* undo change
* remove unsafe
* Update cmd/tendermint/commands/reset_priv_validator.go
Co-authored-by: Thane Thomson
* Update cmd/tendermint/commands/reset_priv_validator.go
Co-authored-by: M. J. Fromberger
* remove export comment
Co-authored-by: Thane Thomson
Co-authored-by: M. J. Fromberger
* Update pending changelog for #8081. (#8093)
* service: add NopService and use for PexReactor (#8100)
* build(deps): Bump google.golang.org/grpc from 1.44.0 to 1.45.0 (#8104)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.44.0 to 1.45.0.
* p2p+flowrate: rate control refactor (#7828)
Adds `CurrentTransferRate` to the flowrate package, since only the current status of the transfer rate is used.
* build(deps): Bump github.com/spf13/cobra from 1.3.0 to 1.4.0 (#8109)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.3.0 to 1.4.0.
* p2p: update polling interval calculation for PEX requests (#8106)
The PEX reactor has a simple feedback control mechanism to decide how often to
poll peers for peer address updates. The idea is to poll more frequently when
knowledge of the network is less, and decrease frequency as knowledge grows.
This change solves two problems:
1. It is possible in some cases we may poll a peer "too often" and get dropped
by that peer for spamming.
2. The first successful peer update with any content resets the polling timer
to a very long time (10m), meaning if we are unlucky in getting an
incomplete reply while the network is small, we may not try again for a very
long time. This may contribute to difficulties bootstrapping sync.
The main change here is to only update the interval when new information is
added to the system, and not (as before) whenever a request is sent out to a
peer. The rate computation is essentially the same as before, although the code
has been a bit simplified, and I consolidated some of the error handling so
that we don't have to check in multiple places for the same conditions.
Related changes:
- Improve error diagnostics for too-soon and overflow conditions.
- Clean up state handling in the poll interval computation.
- Pin the minimum interval to avert any chance of PEX spamming a peer.
* p2p: remove unnecessary panic handling in PEX reactor (#8110)
The message handling in this reactor is all under control of the reactor
itself, and does not call out to callbacks or other externally-supplied code.
It doesn't need to check for panics.
- Remove an irrelevant channel ID check.
- Remove an unnecessary panic recovery wrapper.
* proto: update proto generation to use buf (#7975)
* Hard-code go_package option for .proto files
Signed-off-by: Thane Thomson
* Automatically relocate generated ABCI types after proto-gen
Signed-off-by: Thane Thomson
* Skip building gogoproto (i.e. only build our types)
Signed-off-by: Thane Thomson
* Remove unnecessary proto generation scripts
Signed-off-by: Thane Thomson
* Upgrade buf config from v1beta1 to v1
Signed-off-by: Thane Thomson
* Add simple proto generation script
Signed-off-by: Thane Thomson
* Replace buf-based protobuf generation with simple protoc-based approach
Signed-off-by: Thane Thomson
* Remove custom buf-based Docker image generation config and Dockerfile
Signed-off-by: Thane Thomson
* Adopt Cosmos SDK's approach to Protobuf linting and breakage checking in CI
Signed-off-by: Thane Thomson
* Suppress command echo when running proto checks
Signed-off-by: Thane Thomson
* Fix proto-check workflow YAML indentation
Signed-off-by: Thane Thomson
* Restore proto-format target
Signed-off-by: Thane Thomson
* Replace custom BASH script with make equivalent
Signed-off-by: Thane Thomson
* Remove proto linting/breaking changes CI checks after discussion today
Signed-off-by: Thane Thomson
* Remove dangling reference to CI workflow that no longer exists
Signed-off-by: Thane Thomson
* Update contributing guidelines relating to protos
Signed-off-by: Thane Thomson
* Use buf instead for generating protos
Signed-off-by: Thane Thomson
* Remove unused buf config for gogoprotobuf
Signed-off-by: Thane Thomson
* Add reminder for if we migrate fully to buf
Signed-off-by: Thane Thomson
* Restore protopackage script for #8065
Signed-off-by: Thane Thomson
* Fix permissions on protopackage script
Signed-off-by: Thane Thomson
* Update contributing guidelines to show building of protos using buf
Signed-off-by: Thane Thomson
* Fix breaking changes check and add disclaimer
Signed-off-by: Thane Thomson
* Expand on contributing guidelines for clarity
Signed-off-by: Thane Thomson
* Re-remove old proto workflows
Signed-off-by: Thane Thomson
* Add buf-based proto linting workflow in CI
Signed-off-by: Thane Thomson
* Superficially reorder proto targets
Signed-off-by: Thane Thomson
* Fix proto lints
Signed-off-by: Thane Thomson
* Fix GA workflow YAML indentation
Signed-off-by: Thane Thomson
* Temporarily use forked version of mlc
Use forked version of markdown-link-check until
https://github.com/gaurav-nelson/github-action-markdown-link-check/pull/126
lands.
Signed-off-by: Thane Thomson
* Temporarily disable markdown link checker
Signed-off-by: Thane Thomson
* Remove gogo protos - superseded by version from buf registry
Signed-off-by: Thane Thomson
* consensus: ensure the node terminates on consensus failure (#8111)
Updates #8077. The panic handler for consensus currently attempts to effect a
clean shutdown, but this can leave a failed node running in an unknown state
for an arbitrary amount of time after the failure.
Since a panic at this point means consensus is already irrecoverably broken, we
should not allow the node to continue executing. After making a best effort to
shut down the writeahead log, re-panic to ensure the node will terminate before
any further state transitions are processed.
Even with this change, it is possible some transitions may occur while the
cleanup is happening. It might be preferable to abort unconditionally without
any attempt at cleanup.
Related changes:
- Clean up the creation of WAL directories.
- Filter WAL close errors at rethrow.
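A minimal sketch of the recover-cleanup-repanic pattern described above, with hypothetical function names (`runConsensusStep`, `closeWAL`) standing in for the real consensus and WAL code:

```go
package main

import "fmt"

// runConsensusStep sketches the pattern above: on a panic, make a best effort
// to close the write-ahead log, then re-panic so the process terminates
// instead of continuing in an unknown, broken state.
func runConsensusStep(step func(), closeWAL func() error) {
	defer func() {
		if r := recover(); r != nil {
			if err := closeWAL(); err != nil {
				fmt.Println("error closing WAL:", err)
			}
			panic(r) // re-panic: the node must not keep processing transitions
		}
	}()
	step()
}

func main() {
	defer func() {
		// Top-level recover only so this demo exits cleanly.
		fmt.Println("terminated with:", recover())
	}()
	runConsensusStep(
		func() { panic("consensus failure") },
		func() error { fmt.Println("WAL closed"); return nil },
	)
}
```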
* Update abci++_basic_concepts_002_draft.md (#8114)
Minor Typo (nice doc!)
* minor typo in docs (#8116)
* readme: add vocdoni (#8117)
Add Vocdoni under applications section on the README.
* rfc: RFC 015 ABCI++ Tx Mutation (#8033)
This pull requests adds an RFC to discuss the proposed mechanism for transaction replacement detailed in the ABCI++ specification.
* node: cleanup evidence db (#8119)
* libs/log: remove Must constructor (#8120)
* libs/log: remove Must constructor
* Update test/e2e/node/main.go
Co-authored-by: M. J. Fromberger
* use stdlog
Co-authored-by: M. J. Fromberger
* cleanup: remove commented code (#8123)
* autofile: reduce minor panic and docs changes (#8122)
* autofile: reduce minor panic and docs changes
* fix lint
* ADR: Protocol Buffers Management (#8029)
* First draft of protobuf management ADR
Signed-off-by: Thane Thomson
* Pre-emptively add ADR to "Accepted" section in README
Signed-off-by: Thane Thomson
* Add missing prototool link
Signed-off-by: Thane Thomson
* Elaborate on positive consequences of decision
Signed-off-by: Thane Thomson
* Add clang-format GA to detailed design
Signed-off-by: Thane Thomson
* Fix broken link
Signed-off-by: Thane Thomson
* Add notes on automated docs generation
Signed-off-by: Thane Thomson
* Rewording and restructuring for clarity
Signed-off-by: Thane Thomson
* Grammatical fixes and elaborations
Signed-off-by: Thane Thomson
* Revise wording for clarity
Signed-off-by: Thane Thomson
* Address comments
Signed-off-by: Thane Thomson
* Update ADR to reflect current consensus on Buf
Signed-off-by: Thane Thomson
* Minor grammar fix
Signed-off-by: Thane Thomson
Co-authored-by: M. J. Fromberger
* abci++: synchronize PrepareProposal with the newest version of the spec (#8094)
This change implements the logic for the PrepareProposal ABCI++ method call. The main logic for creating and issuing the PrepareProposal request lives in execution.go and is tested in a set of new tests in execution_test.go. This change also updates the mempool mock to use a mockery generated version and removes much of the plumbing for the no longer used ABCIResponses.
* abci++: remove app_signed_updates (#8128)
* state: avoid panics for marshaling errors (#8125)
* build(deps): Bump github.com/stretchr/testify from 1.7.0 to 1.7.1 (#8131)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.7.0 to 1.7.1.
* libs/clist: remove unused surface area (#8134)
* libs/events: remove unnecessary unsubscription code (#8135)
The events switch code is largely vestigial and is responsible for
wiring between the consensus state machine and the consensus
reactor. While there may historically have been a need to manage
these subscriptions at runtime, that ability is no longer used:
subscriptions are registered during startup, and the switch shuts
down at the end.
Eventually the EventSwitch should be replaced by a much smaller
implementation of an eventloop in the consensus state machine, but
cutting down on the scope of the event switch will help clarify the
requirements from the consensus side.
* blocksync: drop redundant shutdown mechanisms (#8136)
* mempool: test harness should expose application (#8143)
This is minor, but I was trying to write a test and realized that the
application reference in the harness isn't actually used, which is
quite confusing.
* types: update synchrony params to match checked in proto (#8142)
The `.proto` files do not have the `nullable = false` annotation present on the `SynchronyParams` durations. This pull request updates `SynchronyParams` to match the checked-in proto files. Note, this does not make the code buildable against the latest protos. This pull request was achieved by checking out all files _not relevant_ to `SynchronyParams` and removing the new `TimeoutParams` from the `params.proto` file. Future updates will add these back.
This pull request also adds a `nil` check to the `pbParams.Synchrony` field in `ConsensusParamsFromProto`. Old versions of Tendermint will not have the `Synchrony` parameters filled in so this code would panic on startup.
We will fill in the empty fields with defaults, but per https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-009-consensus-parameter-upgrades.md#only-update-hashedparams-on-hash-breaking-releases we will keep them out of the hash during this release.
* docs: PBTS synchrony issues runbook (#8129)
closes: #7756
# What does this pull request change?
This pull request adds a new runbook for operators encountering errors related to the new Proposer-Based Timestamps algorithm. The goal of this runbook is to give operators a set of clear steps that they can follow if they are having issues producing blocks because of clock synchronization problems.
This pull request also renames the `*PrevoteDelay` metrics to drop the term `MessageDelay`. These metrics provide a combined view of `message_delay` + `synchrony` so the name may be confusing.
# Questions to reviewers
* Are there ways to make the set of steps clearer or are there any pieces that seem confusing?
* consensus: avoid extra close channel (#8144)
Saw this in a test panic; it doesn't seem necessary.
* Docs: abci++ typo (#8147)
* p2p: adjust max non-persistent peer score (#8137)
Guarantee persistent peers have the highest connecting priority.
peerStore.Ranked returns peers with equal scores in an arbitrary order.
* blocksync: remove intermediate channel (#8140)
Based on local testing, I'm now convinced that this is OK, in part because the new p2p layer has more caching and queueing.
* events: remove service aspects of event switch (#8146)
* consensus: avoid persistent kvstore in tests (#8148)
* consensus: avoid race in accessing channel (#8149)
* autofile: remove vestigial close mechanism (#8150)
* types: minor cleanup of unused or minimally used types (#8154)
* state: panic on ResponsePrepareProposal validation error (#8145)
* state: panic on ResponsePrepareProposal validation error
* lint++
Co-authored-by: Sam Kleinman
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* mempool: reduce size of test (#8152)
This is failing intermittently, but it's a really simple test, and I
suspect that we're just running into thread scheduling issues on CI
nodes. I don't think making the test smaller reduces the utility of
this test.
* testing: logger cleanup (#8153)
This contains two major changes:
- Remove the legacy test logging method, and just explicitly call the
noop logger. This is just to make the test logging behavior more
coherent and clear.
- Move the logging in the light package from the testing.T logger to
the noop logger. It's really the case that we very rarely need/want
to consider test logs unless we're doing reproductions and running a
narrow set of tests.
In most cases, I (for one) prefer to run in verbose mode so I can
watch progress of tests, but I basically never need to consider
logs. If I do want to see logs, then I can edit in the testing.T
logger locally (which is what you have to do today, anyway.)
* consensus: skip channel close during shutdown (#8155)
I see this panic in tests occasionally, and I don't think there's any
need to close this channel:
- it's only sent to in one place, which has a select case with a
default clause, so there's no chance of deadlocks.
- the only place we receive from it has a timeout.
* consensus: change lock handling in reactor and handleMsg for RoundState (forward-port #7994 #7992) (#8139)
Related to #8157
* node: always sync with the application at startup (#8159)
* build(deps): Bump gaurav-nelson/github-action-markdown-link-check from 1.0.13 to 1.0.14 (#8166)
* build(deps): Bump docker/build-push-action from 2.9.0 to 2.10.0 (#8167)
* consensus: reduce size of test fixtures and logging rate (#8172)
We can reduce the size of test fixtures (which will improve test
reliability) without impacting these tests' primary role (verifying
correctness).
Reducing these tests' logging will also make them easier to read,
which will be a good quality-of-life improvement for devs.
* state: propagate error from state store (#8171)
* state: propagate error from state store
* fix lint
* ABCI++: Update new protos to use enum instead of bool (#8158)
closes: #8039
This pull request updates the new ABCI++ protos to use `enum`s in place of `bool`s. An `enum` may be preferred over a `bool` because an `enum` can be updated to include new statuses in the future, whereas a `bool` is fixed as just `true` or `false` over the whole lifecycle of the API.
* rollback: cleanup second node during test (#8175)
* build(deps): Bump github.com/golangci/golangci-lint from 1.44.2 to 1.45.0 (#8169)
* abci++: remove CheckTx call from PrepareProposal flow (#8176)
* types: add TimeoutParams into ConsensusParams structs (#8177)
* consensus: avoid panic during shutdown (#8170)
* consensus: cleanup tempfile explicitly (#8184)
* consensus: add leaktest check to replay tests (#8185)
* consensus: update state machine to use the new consensus params (#8181)
* build(deps): Bump github.com/golangci/golangci-lint from 1.45.0 to 1.45.2 (#8192)
* build(deps): Bump minimist from 1.2.5 to 1.2.6 in /docs (#8196)
Bumps [minimist](https://github.com/substack/minimist) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/substack/minimist/releases)
- [Commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6)
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* build(deps): Bump bufbuild/buf-setup-action from 1.1.0 to 1.3.0 (#8199)
* build(deps): Bump github.com/adlio/schema from 1.2.3 to 1.3.0 (#8201)
* Update ABCI++ spec with decisions taken in the bi-weekly meeting (#8191)
* Clarify 0-length vote extensions in the spec, according to #8174
* Update spec so that Tendermint can propose more txs than the size limit
* Addressed Manu's comment
* Reworded size limit following Manu's suggestion
* consensus: timeout params in toml used as overrides (#8186)
Replaces the set of timeout parameters in the config.toml file with unsafe-*override versions of the corresponding ConsensusParams.Timeout field. These fields can be used for the duration of v0.36 to override the consensus param in case of emergency.
Adds logic to the ./internal/consensus/State type for correctly calculating the value of each timeout based on the set of overrides specified.
* timeout parameters take the default if not set (#8189)
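As an illustration, the override knobs described above might look like this in `config.toml`; the exact key names and values here are assumptions for illustration, not verbatim from the release:

```toml
[consensus]
# Emergency overrides for the consensus timeout parameters. When a key
# is left unset (commented out), the corresponding ConsensusParams
# value is used.
# unsafe-propose-timeout-override = "3s"
# unsafe-vote-timeout-override = "1s"
unsafe-commit-timeout-override = "1s"
```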
* Fix empty tendermint version in Docker (#8161)
* Fix Dockerfile and scripts
* Fix docker scripts
* Remove unused scripts
* Retrigger checks
Co-authored-by: Simon Kirillov
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* migration: remove stale seen commits (#8205)
* Re-enable markdown link checker. (#8212)
The upstream fix for link syntax has landed.
- Uncomment the workflow and bump the version.
- Add a config file to encourage retries.
* Fix broken Markdown links (#8214)
- Remove pointless Makefile and package documentation.
- Fix broken links.
* config: default indexer configuration to null (#8222)
After this change, new nodes will not have indexing enabled by default.
Test configurations will still use "kv".
* Update pending changelog and upgrading notes.
* Fix indexer config for the test app.
* Update config template and enable indexing for e2e tests.
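Concretely, the new default might appear in the generated `config.toml` like this (section and key names assumed for illustration):

```toml
[tx-index]
# The indexer backend; "null" disables indexing on new nodes.
# Test configurations continue to use "kv".
indexer = ["null"]
```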
* Document steps for updating the timeout parameters. (#8217)
closes: #8182
This pull request adds documentation to the `UPGRADING.md` file as well as a set of deprecation checks for the old timeout parameters in the `config.toml` file. It additionally documents the parameters in the `genesis.md`.
* build(deps): Bump github.com/vektra/mockery/v2 from 2.10.0 to 2.10.1 (#8226)
* state: avoid premature genericism (#8224)
* lint: bump linter version in ci (#8234)
* light: remove untracked close channel (#8228)
* e2e: Fix hashing for app + Fix logic of TestApp_Hash (#8229)
* Fix hashing of e2e App
* Fix TestApp_Hash
* CaMeL
* Update test/e2e/app/state.go
Co-authored-by: M. J. Fromberger
* for-->Eventually + if-->require
* Update test/e2e/tests/app_test.go
Co-authored-by: Sam Kleinman
* fix lint
Co-authored-by: M. J. Fromberger
Co-authored-by: Sam Kleinman
* abci++: correct max-size check to only operate on added and unmodified (#8242)
* build(deps): Bump bufbuild/buf-setup-action from 1.3.0 to 1.3.1 (#8245)
* Remove `ModifiedTxStatus` from the spec and the code (#8210)
* Outstanding abci-gen changes to 'pb.go' files
* Removed modified_tx_status from spec and protobufs
* Fix sed for OSX
* Regenerated abci protobufs with 'abci-proto-gen'
* Code changes. UTs and e2e tests passing
* Recovered UT: TestPrepareProposalModifiedTxStatusFalse
* Adapted UT
* Fixed UT
* Revert "Fix sed for OSX"
This reverts commit e576708c618f0ef732498f4d348503b823b6c9e8.
* Update internal/state/execution_test.go
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update abci/example/kvstore/kvstore.go
Co-authored-by: M. J. Fromberger
* Update internal/state/execution_test.go
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Addressed some comments
* Added one test that tests error at the ABCI client + Fixed some mock calls
* Addressed remaining comments
* Update abci/example/kvstore/kvstore.go
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update abci/example/kvstore/kvstore.go
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update abci/example/kvstore/kvstore.go
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Addressed William's latest comments
* Addressed Michael's comment
* Fixed UT
* Some md fixes
* More md fixes
* gofmt
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: M. J. Fromberger
* build(deps): Bump github.com/vektra/mockery/v2 from 2.10.1 to 2.10.2 (#8246)
* statesync: merge channel processing (#8240)
* node: remove channel and peer update initialization from construction (#8238)
* build(deps): Bump github.com/vektra/mockery/v2 from 2.10.2 to 2.10.4 (#8250)
* build(deps): Bump github.com/BurntSushi/toml from 1.0.0 to 1.1.0 (#8251)
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 1.0.0 to 1.1.0.
Release notes
Sourced from github.com/BurntSushi/toml's releases.
v1.1.0
Just a few bugfixes:
- Skip fields with `toml:"-"` even when they're unsupported types. Previously something like this would fail to encode because `func` is an unsupported type:
      struct {
          Str  string `toml:"str"`
          Func func() `toml:"-"`
      }
- Multiline strings can't end with `\`. This is valid:
      # Valid
      key = """ foo \
      """
  This is invalid:
      key = """ foo \ """
- Don't quote values in TOMLMarshaler. Previously they would always include quoting (e.g. `"value"`), while the entire point of this interface is to bypass that.
Commits
- 891d261 Don't error out if a multiline string ends with an incomplete UTF-8 sequence
- ef65e34 Don't run Unmarshal() through Decode()
- 573cad4 Merge pull request #347 from zhsj/fix-32
- f3633f4 Fix test on 32 bit arch
- 551f4a5 Merge pull request #344 from lucasbutn/hotfix-341-marshaler-shouldnot-writequ...
- dec5825 Removed write quote in marshal to allow write other types than strings
- 2249a9c Multiline strings can't end with `\`
- 51b22f2 Fix README
- 01e5516 Skip fields with toml:"-", even when they're unsupported types
- 87b9f05 Fix tests for older Go versions
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/BurntSushi/toml&package-manager=go_modules&previous-version=1.0.0&new-version=1.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* consensus: remove string indented function (#8257)
* p2p: inject nodeinfo into router (#8261)
* node: reorder service construction (#8262)
* Forward-port changelogs from v0.34.17 and v0.34.18 to master. (#8265)
* Forward-port changelogs from v0.34.17 and v0.34.18 to master.
* Fix broken markdown links.
* statesync: tweak test performance (#8267)
* Fix more broken Markdown links. (#8271)
* consensus: avoid panics during handshake (#8266)
There's no case where we receive an error during handshake and don't
just return/continue, and it's at a point during startup where not
much else is going on in the process, so having some classes of errors
return errors while others panic is confusing and doesn't protect
anything.
* node: move handshake out of constructor (#8264)
* statesync+blocksync: move event publications into the sync operations (#8274)
* scmigrate: ensure target key is correctly renamed (#8276)
Prior to v0.35, the keys for seen-commit records included the applicable
height. In v0.35 and beyond, we only keep the record for the latest height,
and its key does not include the height.
Update the seen-commit migration to ensure that the record we retain after
migration is correctly renamed to omit the height from its key.
Update the test cases to check for this condition after migrating.
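The rename can be sketched in Go as follows; the `SC:` prefix and the fixed-width big-endian height suffix used here are hypothetical stand-ins for the real v0.35 key schema:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// legacySeenCommitKey builds a pre-v0.35-style key: a "SC:" prefix
// (hypothetical, for illustration) followed by an 8-byte big-endian height.
func legacySeenCommitKey(height int64) []byte {
	var buf bytes.Buffer
	buf.WriteString("SC:")
	binary.Write(&buf, binary.BigEndian, height) // appends 8 bytes
	return buf.Bytes()
}

// migrateKey strips the trailing height so the single retained record
// ends up under the height-free key, as the migration fix requires.
func migrateKey(old []byte) []byte {
	return old[:len(old)-8] // drop the 8-byte big-endian height suffix
}

func main() {
	old := legacySeenCommitKey(42)
	fmt.Printf("%q -> %q\n", old, migrateKey(old))
}
```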
* Forward-port changelog for v0.34.19 to master. (#8279)
* build(deps): Bump github.com/lib/pq from 1.10.4 to 1.10.5 (#8283)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.4 to 1.10.5.
Commits
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/lib/pq&package-manager=go_modules&previous-version=1.10.4&new-version=1.10.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* node+statesync: normalize initialization (#8275)
* rpc: add more nil checks in the status end point (#8287)
* consensus: add nil check to gossip routine (#8288)
* switch to consensus change startup ordering (#8290)
* Forward-port v0.35.3 changelog to master. (#8291)
* Fix release notes to match the prevailing style. (#8292)
* Add a tool to update old config files to the latest version (#8281)
* keymigrate: fix decoding of block-hash row keys (#8294)
* test/fuzz: update oss-fuzz build script to match reality (#8296)
p2p/pex and p2p/addrbook were deleted in 03ad7d6f20d9fd8fe83d31db168f433480552e94,
secret_connection was renamed to secretconnection in dd97ac6e1c8810a115e53a61c9a91dd40275b3fe.
* Fix a spelling error (#8297)
* build: use go install instead of go get. (#8299)
* confix: clean up and document transformations (#8301)
Right now the confix tool works up to v0.35. This change is preparation for
extending the tool to handle additional changes in v0.36.
Mostly this is adding documentation. The one functional change is to fix the
name of the moved "fast-sync" parameter, which was renamed "enable".
- Document the origin of each transformation step.
- Update fast-sync target name.
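As a sketch of that rename (values and section layout illustrative, not the tool's literal output):

```toml
# Before (older config, with the legacy parameter name):
fast-sync = true

# After the confix transformation, under the moved section:
[blocksync]
enable = true
```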
* build(deps): Bump codecov/codecov-action from 2.1.0 to 3.0.0 (#8306)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2.1.0 to 3.0.0.
Release notes
Sourced from codecov/codecov-action's releases.
v3.0.0
Breaking Changes
- #689 Bump to node16 and small fixes
Features
- #688 Incorporate gcov arguments for the Codecov uploader
Dependencies
- #548 build(deps-dev): bump jest-junit from 12.2.0 to 13.0.0
- #603 [Snyk] Upgrade @actions/core from 1.5.0 to 1.6.0
- #628 build(deps): bump node-fetch from 2.6.1 to 3.1.1
- #634 build(deps): bump node-fetch from 3.1.1 to 3.2.0
- #636 build(deps): bump openpgp from 5.0.1 to 5.1.0
- #652 build(deps-dev): bump @vercel/ncc from 0.30.0 to 0.33.3
- #653 build(deps-dev): bump @types/node from 16.11.21 to 17.0.18
- #659 build(deps-dev): bump @types/jest from 27.4.0 to 27.4.1
- #667 build(deps): bump actions/checkout from 2 to 3
- #673 build(deps): bump node-fetch from 3.2.0 to 3.2.3
- #683 build(deps): bump minimist from 1.2.5 to 1.2.6
- #685 build(deps): bump @actions/github from 5.0.0 to 5.0.1
- #681 build(deps-dev): bump @types/node from 17.0.18 to 17.0.23
- #682 build(deps-dev): bump typescript from 4.5.5 to 4.6.3
- #676 build(deps): bump @actions/exec from 1.1.0 to 1.1.1
- #675 build(deps): bump openpgp from 5.1.0 to 5.2.1
Commits
- e3c5604 Merge pull request #689 from codecov/feat/gcov
- 174efc5 Update package-lock.json
- 6243a75 bump to 3.0.0
- 0d6466f Bump to node16
- d4729ee fetch.default
- 351baf6 fix: bash
- d8cf680 Merge pull request #675 from codecov/dependabot/npm_and_yarn/openpgp-5.2.1
- b775e90 Merge pull request #676 from codecov/dependabot/npm_and_yarn/actions/exec-1.1.1
- 2ebc2f0 Merge pull request #682 from codecov/dependabot/npm_and_yarn/typescript-4.6.3
- 8e2ef2b Merge pull request #681 from codecov/dependabot/npm_and_yarn/types/node-17.0.23
- Additional commits viewable in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=codecov/codecov-action&package-manager=github_actions&previous-version=2.1.0&new-version=3.0.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump actions/setup-go from 2 to 3 (#8305)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2 to 3.
Release notes
Sourced from actions/setup-go's releases.
v3.0.0
What's Changed
Breaking Changes
With the update to Node 16, all scripts will now be run with Node 16 rather than Node 12.
This new major release removes the `stable` input, so there is no need to specify additional input to use pre-release versions. This release also corrects the pre-release version syntax to satisfy SemVer notation (1.18.0-beta1 -> 1.18.0-beta.1, 1.18.0-rc1 -> 1.18.0-rc.1).
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v3
with:
go-version: '1.18.0-rc.1'
- run: go version
Add check-latest input
In scope of this release we add the check-latest input. If check-latest is set to true, the action first checks if the cached version is the latest one. If the locally cached version is not the most up-to-date, a Go version will then be downloaded from the go-versions repository. By default, check-latest is set to false.
Example of usage:
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: '1.16'
check-latest: true
- run: go version
Moreover, we updated @actions/core from 1.2.6 to 1.6.0.
v2.1.5
In scope of this release we updated matchers.json to improve the problem matcher pattern. For more information, please refer to this pull request.
v2.1.4
What's Changed
New Contributors
Full Changelog: https://github.com/actions/setup-go/compare/v2.1.3...v2.1.4
v2.1.3
- Updated communication with runner to use environment files rather than workflow commands
v2.1.2
This release includes vendored licenses for this action's npm dependencies.
... (truncated)
Commits
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/setup-go&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump actions/stale from 4 to 5 (#8304)
Bumps [actions/stale](https://github.com/actions/stale) from 4 to 5.
Release notes
Sourced from actions/stale's releases.
v5.0.0
Features
v4.1.0
Features
Changelog
Sourced from actions/stale's changelog.
Changelog
Commits
- 3cc1237 Merge pull request #670 from actions/thboop/node16upgrade
- 76e9fbc update node version
- 6467b96 Update default runtime to node16
- 8af6051 build(deps-dev): bump jest-circus from 27.2.0 to 27.4.6 (#665)
- 7a7efca Fix per issue operation count (#662)
- 04a1828 build(deps-dev): bump ts-jest from 27.0.5 to 27.1.2 (#641)
- 65ca395 build(deps-dev): bump eslint-plugin-jest from 24.4.2 to 25.3.2 (#639)
- eee276c build(deps-dev): bump prettier from 2.4.1 to 2.5.1 (#628)
- 6c2f9f3 Merge pull request #640 from dmitry-shibanov/v-dmshib/fix-check-dist
- 37323f1 fix check-dist.yml
- Additional commits viewable in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/stale&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump actions/download-artifact from 2 to 3 (#8302)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 2 to 3.
Release notes
Sourced from actions/download-artifact's releases.
v3.0.0
What's Changed
Breaking Changes
With the update to Node 16, all scripts will now be run with Node 16 rather than Node 12.
v2.1.0 Download Artifact
- Improved output & logging
- Fixed issue where downloading all artifacts could cause display percentages to be over 100%
- Various small bug fixes & improvements
v2.0.10
- Retry on HTTP 500 responses from the service
v2.0.9
- Fixes to proxy related issues
v2.0.8
- Improvements to retryability if an error is encountered during artifact download
v2.0.7 download-artifact
- Improved download retry-ability if a partial download is encountered
v2.0.6
Update actions/core NPM package that is used internally
v2.0.5
- Add Third Party License Information
v2.0.4
- Use the latest version of the @actions/artifact NPM package
v2.0.3
v2.0.2
- Support for tilde expansion
v2.0.1
- Download path output
- Improved logging
Commits
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/download-artifact&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump actions/upload-artifact from 2 to 3 (#8303)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 2 to 3.
Release notes
Sourced from actions/upload-artifact's releases.
v3.0.0
What's Changed
- Update default runtime to node16 (#293)
- Update package-lock.json file version to 2 (#302)
Breaking Changes
With the update to Node 16, all scripts will now be run with Node 16 rather than Node 12.
v2.3.1
Fix for empty files on Windows failing on upload (#281)
v2.3.0 Upload Artifact
- Optimizations for faster uploads of larger files that are already compressed
- Significantly improved logging when there are chunked uploads
- Clarifications in logs around the upload size and prohibited characters that aren't allowed in the artifact name or any uploaded files
- Various other small bugfixes & optimizations
v2.2.4
- Retry on HTTP 500 responses from the service
v2.2.3
- Fixes for proxy related issues
v2.2.2
- Improved retryability and error handling
v2.2.1
- Update used actions/core package to the latest version
v2.2.0
- Support for artifact retention
v2.1.4
- Add Third Party License Information
v2.1.3
- Use updated version of the @action/artifact NPM package
v2.1.2
- Increase upload chunk size from 4MB to 8MB
- Detect case insensitive file uploads
v2.1.1
- Fix for certain symlinks not correctly being identified as directories before starting uploads
v2.1.0
- Support for uploading artifacts with multiple paths
- Support for using exclude paths
- Updates to dependencies
... (truncated)
Commits
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/upload-artifact&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump github.com/creachadair/tomledit from 0.0.11 to 0.0.13 (#8307)
Bumps [github.com/creachadair/tomledit](https://github.com/creachadair/tomledit) from 0.0.11 to 0.0.13.
Commits
- baee445 Release v0.0.13.
- 8dfcc1b Exercise insertion before comments.
- 97f4e85 When inserting a key, push it before block comments.
- 029089e Release v0.0.12.
- d226405 Test finding the global table.
- 34b7aad Let FindTable return the global table with an empty name.
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/creachadair/tomledit&package-manager=go_modules&previous-version=0.0.11&new-version=0.0.13)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* cli: add graceful catches to SIGINT (#8308)
* Update outdated doc comment (#8309)
SetEventBus was deleted, as was the NopEventBus.
* Add configuration diff tool. (#8298)
* Add diff outputs as testdata.
* Normalize all samples to kebabs.
* abci++: only include meaningful header fields in data passed-through to application (#8216)
closes: #7950
* Add configuration updates for Tendermint v0.36. (#8310)
* v36: remove [fastsync] and [blocksync] config sections
* v36: remove [blocksync], consolidate rename
* v36: remove gRPC options from [rpc]
* v36: add top-level mode setting
* v36: remove deprecated per-node consensus timeouts
* v36: remove vestigial mempool.wal-dir setting
* v36: add queue-type setting
* v36: add p2p connection limits
* v36: add or update statesync.fetchers
* v36: add statesync.use-p2p setting
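Taken together, the affected v0.36 settings might look roughly like this in a generated config; the key names follow the bullets above, but all values here are illustrative, not authoritative defaults:

```toml
# Top-level node mode, replacing the removed per-section toggles.
mode = "validator"          # e.g. "full", "validator", or "seed"

[p2p]
queue-type = "priority"     # new queue-type setting
max-connections = 64        # illustrative connection-limit value

[statesync]
use-p2p = true
fetchers = 4                # illustrative fetcher count
```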
* events: remove unused event code (#8313)
* service: minor cleanup of comments (#8314)
* state: remove unused weighted time (#8315)
* Remove resolved TODO comments. (#8325)
Resolved by merge of #8300.
* rpc: avoid leaking threads (#8328)
* pubsub: [minor] remove unused stub method (#8316)
OnReset was removed from the service interface and we missed deleting
this.
* Only run the markdown linter if markdown was touched. (#8337)
* Update RFC ToC for RFC-015. (#8338)
* confix: remove mempool.version in v0.36 (#8334)
* Work around markdown-link-check issues. (#8339)
Work around two issues causing the markdown link check to fail in CI.
1. https://github.com/actions/checkout/pull/760. A git permissions issue,
apparently triggered by a combination of a git change and the behaviour of
actions/checkout.
2. https://github.com/gaurav-nelson/github-action-markdown-link-check/pull/129.
Merging an updated version of the underlying package that fixes a bug in the
handling of local #anchors.
The workaround is a temporary patched fork of the link-checker action. This
should be removed once the upstream issues are addressed.
* cli: simplify resetting commands (#8312)
* confix: convert tx-index.indexer from string to array (#8342)
The format of this config value was changed in v0.35.
- Move plan to its own file (for ease of reading).
- Convert indexer string to an array if not already done.
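For example, the conversion wraps a legacy scalar indexer value in a one-element array (a sketch of the transformation, not the tool's literal output):

```toml
# Before (pre-v0.35 format):
[tx-index]
indexer = "kv"

# After conversion:
[tx-index]
indexer = ["kv"]
```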
* build(deps): Bump github.com/vektra/mockery/v2 from 2.10.4 to 2.10.6 (#8346)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.10.4 to 2.10.6.
Release notes
Sourced from github.com/vektra/mockery/v2's releases.
v2.10.6
Changelog
- df6e689 Add PR/issue templates
- e8bf201 Add golang-1.18 note
- 54589be Merge pull request #445 from bigbluedisco/fix/bump-golang-org-x-tools
- aa25af0 fix: bump golang.org/x/tools to v0.1.10 to fix some go 1.18 issues
Commits
- 54589be Merge pull request #445 from bigbluedisco/fix/bump-golang-org-x-tools
- aa25af0 fix: bump golang.org/x/tools to v0.1.10 to fix some go 1.18 issues
- e8bf201 Add golang-1.18 note
- df6e689 Add PR/issue templates
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/vektra/mockery/v2&package-manager=go_modules&previous-version=2.10.4&new-version=2.10.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump github.com/spf13/viper from 1.10.1 to 1.11.0 (#8344)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.10.1 to 1.11.0.
Release notes
Sourced from github.com/spf13/viper's releases.
v1.11.0
What's Changed
Exciting New Features 🎉
Enhancements 🚀
Bug Fixes 🐛
Breaking Changes 🛠
Dependency Updates ⬆️
New Contributors
Full Changelog: https://github.com/spf13/viper/compare/v1.10.1...v1.11.0
Commits
- 6986c0a chore: update crypt
- 65293ec add release note configuration
- 6804da7 chore!: drop Go 1.14 support
- 5b21ca1 fix: deprecated config
- 55fac10 chore: fix lint
- e0bf4ac chore: add go 1.18 builds
- 973c265 build(deps): bump github.com/pelletier/go-toml/v2
- 129e4f9 build(deps): bump github.com/pelletier/go-toml/v2
- 9a8603d build(deps): bump actions/setup-go from 2 to 3
- dc76f3c build(deps): bump github.com/spf13/afero from 1.8.1 to 1.8.2
- Additional commits viewable in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/spf13/viper&package-manager=go_modules&previous-version=1.10.1&new-version=1.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* keymigrate: fix conversion of transaction hash keys (#8352)
* keymigrate: fix conversion of transaction hash keys
In the legacy database format, keys were generally stored with a string prefix
to partition the key space. Transaction hashes, however, were not prefixed: The
hash of a transaction was the entire key for its record.
When the key migration script scans its input, it checks the format of each
key to determine whether it has already been converted, so that it is safe to run
the script over an already-converted database.
After checking for known prefixes, the migration script used two heuristics to
distinguish ABCI events and transaction hashes: For ABCI events, whose keys
used the form "name/value/height/index", it checked for the right number of
separators. For hashes, it checked that the length is exactly 32 bytes (the
length of a SHA-256 digest) AND that the value does not contain a "/".
This last check is problematic: Any hash containing the byte 0x2f (the code
point for "/") would be incorrectly filtered out from conversion. This leads to
some transaction hashes not being converted.
To fix this problem, this changes how the script recognizes keys:
1. Use a more rigorous syntactic check to filter out ABCI metadata.
2. Use only the length to identify hashes among what remains.
This change is still not a complete fix: It is possible, though unlikely, that
a valid hash could happen to look exactly like an ABCI metadata key. However,
the chance of that happening is vastly smaller than the chance of generating a
hash that contains at least one "/" byte.
Similarly, it is possible that an already-converted key of some other type
could be mistaken for a hash (not a converted hash, ironically, but another
type of the right length). Again, we can't do anything about that.
* Update pending changelog for #8352. (#8354)
I forgot to add this before merging. 🙁
* Add a script to check documentation for ToC entries. (#8356)
This script verifies that each document in the docs and architecture directory
has a corresponding table-of-contents entry in its README file. It can be run
manually from the command line.
- Hook up this script to run in CI (optional workflow).
- Update ADR ToC to include missing entries this script found.
* build(deps): Bump async from 2.6.3 to 2.6.4 in /docs (#8357)
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)
---
updated-dependencies:
- dependency-name: async
dependency-type: indirect
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* build(deps): Bump github.com/creachadair/atomicfile from 0.2.4 to 0.2.5 (#8365)
Bumps [github.com/creachadair/atomicfile](https://github.com/creachadair/atomicfile) from 0.2.4 to 0.2.5.
Commits
- b8ff50e Release v0.2.5.
- 95084ab Update actions/setup-go to v3.
- 10d28f6 Update actions/checkout to v3.
- 5f1989d Use a more explanatory temp file prefix.
- 7819ee5 Add Go 1.18 to the CI workflow.
- c30fad6 Drop old Go versions from CI.
- ebcfa6b acat: use WriteData to simplify the code
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/creachadair/atomicfile&package-manager=go_modules&previous-version=0.2.4&new-version=0.2.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* Forward port changelog from v0.35.4 to master. (#8364)
* p2p: fix setting in con-tracker (#8370)
* eventbus: publish without contexts (#8369)
* cleanup: unused parameters (#8372)
* cleanup: pin get-diff-action uses to major version only, not minor/patch (#8368)
* abci++: Sync implementation and spec for vote extensions (#8141)
* Refactor so building and linting works
This is the first step towards implementing vote extensions: generating
the relevant proto stubs and getting the build and linter to pass.
Signed-off-by: Thane Thomson
* Fix typo
Signed-off-by: Thane Thomson
* Better describe method given vote extensions
Signed-off-by: Thane Thomson
* Fix types tests
Signed-off-by: Thane Thomson
* Move CanonicalVoteExtension to canonical types proto defs
Signed-off-by: Thane Thomson
* Regenerate protos including latest PBTS synchrony params update
Signed-off-by: Thane Thomson
* Inject vote extensions into proposal
Signed-off-by: Thane Thomson
* Thread vote extensions through code and fix tests
Signed-off-by: Thane Thomson
* Remove extraneous empty value initialization
Signed-off-by: Thane Thomson
* Fix lint
Signed-off-by: Thane Thomson
* Fix missing VerifyVoteExtension request data
Signed-off-by: Thane Thomson
* Explicitly ensure length > 0 to sign vote extension
Signed-off-by: Thane Thomson
* Explicitly ensure length > 0 to sign vote extension
Signed-off-by: Thane Thomson
* Remove extraneous comment
Signed-off-by: Thane Thomson
* Update privval/file.go
Co-authored-by: M. J. Fromberger
* Update types/vote_test.go
Co-authored-by: M. J. Fromberger
* Format
Signed-off-by: Thane Thomson
* Fix ABCI proto generation scripts for Linux
Signed-off-by: Thane Thomson
* Sync intermediate and goal protos
Signed-off-by: Thane Thomson
* Update internal/consensus/common_test.go
Co-authored-by: Sergio Mena
* Use dummy value with clearer meaning
Signed-off-by: Thane Thomson
* Rewrite loop for clarity
Signed-off-by: Thane Thomson
* Panic on ABCI++ method call failure
Signed-off-by: Thane Thomson
* Add strong correctness guarantees when constructing extended commit info for ABCI++
Signed-off-by: Thane Thomson
* Add strong guarantee in extendedCommitInfo that the number of votes corresponds
Signed-off-by: Thane Thomson
* Make extendedCommitInfo function more robust
At first extendedCommitInfo expected votes to be in the same order as
their corresponding validators in the supplied CommitInfo struct, but
this proved to be rather difficult since when a validator set's loaded
from state it's first sorted by voting power and then by address.
Instead of sorting the votes in the same way, this approach simply maps
votes to their corresponding validator's address prior to constructing
the extended commit info. This way it's easy to look up the
corresponding vote and we don't need to care about vote order.
Signed-off-by: Thane Thomson
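The order-independent mapping described above can be sketched like this, with simplified stand-in types (the real `Vote`/`Validator` structs carry signatures, extensions, heights, and more):

```go
package main

import "fmt"

// Simplified stand-ins for the real types.
type Vote struct {
	ValidatorAddress string
	Extension        []byte
}

type Validator struct {
	Address string
}

// votesByValidatorOrder indexes votes by validator address and then
// walks the validator set in its stored order (sorted by power, then
// address), so the original order of the votes no longer matters.
func votesByValidatorOrder(vals []Validator, votes []Vote) ([]Vote, error) {
	byAddr := make(map[string]Vote, len(votes))
	for _, v := range votes {
		byAddr[v.ValidatorAddress] = v
	}
	out := make([]Vote, 0, len(vals))
	for _, val := range vals {
		v, ok := byAddr[val.Address]
		if !ok {
			return nil, fmt.Errorf("no vote from validator %s", val.Address)
		}
		out = append(out, v)
	}
	return out, nil
}

func main() {
	vals := []Validator{{Address: "B"}, {Address: "A"}}
	votes := []Vote{{ValidatorAddress: "A"}, {ValidatorAddress: "B"}}
	ordered, err := votesByValidatorOrder(vals, votes)
	if err != nil {
		panic(err)
	}
	fmt.Println(ordered[0].ValidatorAddress, ordered[1].ValidatorAddress)
}
```

The lookup also gives the strong guarantee mentioned above for free: any validator without a matching vote surfaces immediately as an error rather than a silent misalignment.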
* Remove extraneous validator address assignment
Signed-off-by: Thane Thomson
* Sign over canonical vote extension
Signed-off-by: Thane Thomson
* Validate vote extension signature against canonical vote extension
Signed-off-by: Thane Thomson
* Update privval tests for more meaningful dummy value
Signed-off-by: Thane Thomson
* Add vote extension capability to E2E test app
Signed-off-by: Thane Thomson
* Disable lint for weak RNG usage for test app
Signed-off-by: Thane Thomson
* Use parseVoteExtension instead of custom parsing in PrepareProposal
Signed-off-by: Thane Thomson
* Only include extension if we have received txs
It's unclear at this point why this is necessary to ensure that the
application's local app_hash matches that committed in the previous
block.
Signed-off-by: Thane Thomson
* Require app_hash from app to match that from last block
Signed-off-by: Thane Thomson
* Add contrived (possibly flaky) test to check that vote extensions code works
Signed-off-by: Thane Thomson
* Remove workaround for problem now solved by #8229
Signed-off-by: Thane Thomson
* add tests for vote extension cases
* Fix spelling mistake to appease linter
Signed-off-by: Thane Thomson
* Collapse redundant if statement
Signed-off-by: Thane Thomson
* Formatting
Signed-off-by: Thane Thomson
* Always expect an extension signature, regardless of whether an extension is present
Signed-off-by: Thane Thomson
* Votes constructed from commits cannot include extensions or signatures
Signed-off-by: Thane Thomson
* Pass through vote extension in test helpers
Signed-off-by: Thane Thomson
* Temporarily disable vote extension signature requirement
Signed-off-by: Thane Thomson
* Expand on vote equality test errors for clarity
Signed-off-by: Thane Thomson
* Expand on vote matching error messages in testing
Signed-off-by: Thane Thomson
* Allow for selective subscription by vote type
This is an attempt to fix the intermittently failing
`TestPrepareProposalReceivesVoteExtensions` test in the internal
consensus package.
Occasionally we get prevote messages via the subscription channel, and
we're not interested in those. This change allows us to specify what
types of votes we're interested in (i.e. precommits) and discard the
rest.
Signed-off-by: Thane Thomson
* Read lock consensus state mutex in test helper to avoid data race
Signed-off-by: Thane Thomson
* Revert BlockIDFlag parameter in node test
Signed-off-by: Thane Thomson
* Perform additional check in ProcessProposal for special txs generated by vote extensions
Signed-off-by: Thane Thomson
* e2e: check that our added tx does not cause all txs to exceed req.MaxTxBytes
Signed-off-by: Thane Thomson
* Only set vote extension signatures when signing is successful
Signed-off-by: Thane Thomson
* Remove channel capacity constraint in test helper to avoid missing messages
Signed-off-by: Thane Thomson
* Add TODO to always require extension signatures in vote validation
Signed-off-by: Thane Thomson
* e2e: reject vote extensions if the request height does not match what we expect
Signed-off-by: Thane Thomson
* types: remove extraneous call to voteWithoutExtension in test
Signed-off-by: Thane Thomson
* Remove unnecessary address parameter from CanonicalVoteExtension
Signed-off-by: Thane Thomson
* privval: change test vote type to precommit since we use an extension
Signed-off-by: Thane Thomson
* privval: update signing logic to cater for vote extensions
Signed-off-by: Thane Thomson
* proto: update field descriptions for vote message
Signed-off-by: Thane Thomson
* proto: update field description for vote extension sig in vote message
Signed-off-by: Thane Thomson
* proto/types: use fixed-length 64-bit integers for rounds in CanonicalVoteExtension
Signed-off-by: Thane Thomson
* consensus: fix flaky TestPrepareProposalReceivesVoteExtensions
Signed-off-by: Thane Thomson
* consensus: remove previously added test helper functionality
Signed-off-by: Thane Thomson
* e2e: add error logs when we get an unexpected height in ExtendVote or VerifyVoteExtension requests
Signed-off-by: Thane Thomson
* node_test: get validator addresses from privvals
Signed-off-by: Thane Thomson
* privval/file_test: optimize filepv creation in tests
Signed-off-by: Thane Thomson
* privval: add test to check that vote extensions are always signed
Signed-off-by: Thane Thomson
* Add a script to check documentation for ToC entries. (#8356)
This script verifies that each document in the docs and architecture directory
has a corresponding table-of-contents entry in its README file. It can be run
manually from the command line.
- Hook up this script to run in CI (optional workflow).
- Update ADR ToC to include missing entries this script found.
* build(deps): Bump async from 2.6.3 to 2.6.4 in /docs (#8357)
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)
---
updated-dependencies:
- dependency-name: async
dependency-type: indirect
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* privval/file_test: reset vote ext sig before signing
Signed-off-by: Thane Thomson
Co-authored-by: M. J. Fromberger
Co-authored-by: Sergio Mena
Co-authored-by: William Banfield
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* build(deps): Bump github.com/vektra/mockery/v2 from 2.10.6 to 2.11.0 (#8374)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.10.6 to 2.11.0.
Release notes
Sourced from github.com/vektra/mockery/v2's releases.
v2.11.0
Changelog
- a0d98e4 Add constructor to the generated mocks
- 09de88a Fix Makefile (don't call "clean" during "all")
- eddf049 Fix import
- b4d8eef Fix panic in tests
- a328a65 Merge branch 'master' into add-constructor-for-mocks
- 32dd223 Merge pull request #406 from grongor/add-constructor-for-mocks
- 9489caf TMP-PLS-CHECK-AND-FIXUP fix rebase errors
Commits
- 32dd223 Merge pull request #406 from grongor/add-constructor-for-mocks
- eddf049 Fix import
- a328a65 Merge branch 'master' into add-constructor-for-mocks
- b4d8eef Fix panic in tests
- 9489caf TMP-PLS-CHECK-AND-FIXUP fix rebase errors
- 09de88a Fix Makefile (don't call "clean" during "all")
- a0d98e4 Add constructor to the generated mocks
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/vektra/mockery/v2&package-manager=go_modules&previous-version=2.10.6&new-version=2.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* node: use signals rather than ephemeral contexts (#8376)
* node: cleanup setup for indexer and evidence components (#8378)
* test/fuzz: convert to Go 1.18 native fuzzing (#8359)
* rpc: reformat method signatures and use a context (#8377)
I was digging around over here, and thought it'd be good to
clean up/standardize the line formatting on a few of these methods. Also
found a few cases where we could use contexts better, so did a little
bit of cleanup there too!
* Add confix testdata for Tendermint v0.30. (#8380)
Some additional testdata I grabbed while writing up the draft of RFC 019.
* abci: avoid having untracked requests in the channel (#8382)
It seems to me that once requests are added to the client's tracker (the
`reqSent` linked list), there's no need to actually drain the
channel, because we will mark all of these requests as done/errored
(which propagates to users, as users never get future objects any
more), and then the GC can reap all of the request objects and the
channel accordingly.
* test/fuzz/tests: remove debug logging statement (#8385)
* Add config samples from TM v26, v27, v28, v29. (#8384)
* abci: streamline grpc application construction (#8383)
In my mind this is "don't make grpc any weirder than it has to be."
We definitely don't need to export this type: if you're using gRPC for
ABCI you *probably* don't want to also depend on the huge swath of the
codebase that comes with it.
The ideal case is you generate the proto yourself, stand up a gRPC
service on your own (presumably because your application has other
gRPC services that you want to expose), and then your application
doesn't need to interact with the types package at all. This is
definitely the case for anyone who uses gRPC and doesn't use Go (which
is likely the predominant use case).
If you're using Go, and want to use Tendermint's service runner for
running your gRPC service, you can, but at that point you're already
importing the `types` package (as you were before);
I've just eliminated an intermediate type that you shouldn't need to
think about.
Reviewers: I think the change is pretty rote, but the logic/user-story
above would definitely be better for being validated by someone other
than me. :)
* RFC 019: Configuration File Versioning (#8379)
This RFC discusses issues in how we migrate configuration data across
Tendermint versions, and some options for how to improve the experience for
node operators in the future.
* build(deps): Bump github.com/creachadair/tomledit from 0.0.16 to 0.0.18 (#8392)
Bumps [github.com/creachadair/tomledit](https://github.com/creachadair/tomledit) from 0.0.16 to 0.0.18.
Commits
- 5802e26 Release v0.0.18
- 3c9daf1 document that we don't validate
- da8c938 Remove non-applicable test cases.
- ac4210b parser: ensure unclosed arrays are not treated as empty
- f98f82f parser: ensure array separators are present
- ea1671e scanner: clean up some issues in escape and space handling
- 8168589 scanner: filter bad commas in numeric literals
- 83189e2 scanner: fix some issues in multiline string recognition
- bdc8e22 scanner: allow space separators in date-time strings
- 1ab2c8d Add compliance tests.
- Additional commits viewable in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/creachadair/tomledit&package-manager=go_modules&previous-version=0.0.16&new-version=0.0.18)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump github.com/vektra/mockery/v2 from 2.11.0 to 2.12.0 (#8393)
* abci: application type should take contexts (#8388)
* abci: Application should return errors and nilable response objects (#8396)
* Make encoding of HexBytes values more robust. (#8398)
The HexBytes wrapper type handles decoding byte strings from JSON. In the RPC
API, hashes are encoded as hex digits rather than the standard base64.
Simplify the implementation of this wrapper using the TextMarshaler interface,
which the encoding/json package uses for values (like these) that are meant to
be wrapped in JSON strings.
In addition, allow HexBytes values to be decoded from either hex OR base64
input. This preserves all existing use, but will allow us to remove some
reflection special cases in the RPC decoder plumbing.
Update tests to correctly tolerate empty/nil.
* abci: remove redundant methods in client (#8401)
* Remove obsolete build tagged patch for net.Pipe. (#8399)
The p2p/conn library was using a build patch to work around an old issue with
the net.Conn type that has not existed since Go 1.10. Remove the workaround and
update usage to use the standard net.Pipe directly.
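Since Go 1.10 the connections returned by `net.Pipe` support deadlines directly, so plain usage like the sketch below needs no wrapper (`pingOverPipe` is an illustrative helper, not code from the repo):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// pingOverPipe sends a message across an in-memory net.Pipe and reads
// it back with a read deadline set; on the standard pipe this works
// since Go 1.10 (previously SetDeadline was unsupported).
func pingOverPipe(msg string) (string, error) {
	client, server := net.Pipe()
	defer client.Close()
	defer server.Close()

	// net.Pipe is synchronous: a write blocks until the matching read,
	// so the write must run in its own goroutine.
	go func() { server.Write([]byte(msg)) }()

	if err := client.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
		return "", err
	}
	buf := make([]byte, len(msg))
	n, err := client.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	got, err := pingOverPipe("ping")
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```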
* abci: remove unneccessary implementations (#8403)
* abci: interface should take pointers to arguments (#8404)
* build(deps): Bump bufbuild/buf-setup-action from 1.3.1 to 1.4.0 (#8405)
Bumps [bufbuild/buf-setup-action](https://github.com/bufbuild/buf-setup-action) from 1.3.1 to 1.4.0.
Release notes
Sourced from bufbuild/buf-setup-action's releases.
v1.4.0
- Set the default buf version to v1.4.0
Commits
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bufbuild/buf-setup-action&package-manager=github_actions&previous-version=1.3.1&new-version=1.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump codecov/codecov-action from 3.0.0 to 3.1.0 (#8406)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3.0.0 to 3.1.0.
Release notes
Sourced from codecov/codecov-action's releases.
v3.1.0
Features
- #699 Incorporate `xcode` arguments for the Codecov uploader
Dependencies
- #694 build(deps-dev): bump `@vercel/ncc` from 0.33.3 to 0.33.4
- #696 build(deps-dev): bump `@types/node` from 17.0.23 to 17.0.25
- #698 build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0
Commits
- 81cd2dc Merge pull request #699 from codecov/feat-xcode
- a03184e feat: add xcode support
- 6a6a9ae Merge pull request #694 from codecov/dependabot/npm_and_yarn/vercel/ncc-0.33.4
- 92a872a Merge pull request #696 from codecov/dependabot/npm_and_yarn/types/node-17.0.25
- 43a9c18 Merge pull request #698 from codecov/dependabot/npm_and_yarn/jest-junit-13.2.0
- 13ce822 Merge pull request #690 from codecov/ci-v3
- 4d6dbaa build(deps-dev): bump jest-junit from 13.0.0 to 13.2.0
- 98f0f19 build(deps-dev): bump @types/node from 17.0.23 to 17.0.25
- d3021d9 build(deps-dev): bump @vercel/ncc from 0.33.3 to 0.33.4
- 2c83f35 Update makefile to v3
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=codecov/codecov-action&package-manager=github_actions&previous-version=3.0.0&new-version=3.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump google.golang.org/grpc from 1.45.0 to 1.46.0 (#8408)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.45.0 to 1.46.0.
Release notes
Sourced from google.golang.org/grpc's releases.
Release 1.46.0
New Features
- server: Support setting `TCP_USER_TIMEOUT` on grpc.Server connections using keepalive.ServerParameters.Time (#5219)
- client: perform graceful switching of LB policies in the ClientConn by default (#5285)
- all: improve logging by including channelz identifier in log messages (#5192)
API Changes
- grpc: delete `WithBalancerName()` API, deprecated over 4 years ago in #1697 (#5232)
- balancer: change BuildOptions.ChannelzParentID to an opaque identifier instead of int (#5192)
  - Note: the balancer package is labeled as EXPERIMENTAL, and we don't believe users were using this field.
Behavior Changes
- client: change connectivity state to `TransientFailure` in pick_first LB policy when all addresses are removed (#5274)
  - This is a minor change that brings grpc-go's behavior in line with the intended behavior and how C and Java behave.
- metadata: add client-side validation of HTTP-invalid metadata before attempting to send (#4886)
Bug Fixes
- metadata: make a copy of the value slices in FromContext() functions so that modifications won't be made to the original copy (#5267)
- client: handle invalid service configs by applying the default, if applicable (#5238)
- xds: the xds client will now apply a 1 second backoff before recreating ADS or LRS streams (#5280)
Dependencies
Commits
- e8d06c5 Change version to 1.46.0 (#5296)
- efbd542 gcp/observability: correctly test this module in presubmit tests (#5300) (#5307)
- 4467a29 gcp/observability: implement logging via binarylog (#5196)
- 18fdf54 cmd/protoc-gen-go-grpc: allow hooks to modify client structs and service hand...
- 337b815 interop: build client without timeout; add logs to help debug failures (#5294)
- e583b19 xds: Add RLS in xDS e2e test (#5281)
- 0066bf6 grpc: perform graceful switching of LB policies in the ClientConn by defaul...
- 3cccf6a xdsclient: always backoff between new streams even after successful stream (#...
- 4e78093 xds: ignore routes with unsupported cluster specifiers (#5269)
- 99aae34 cluster manager: Add Graceful Switch functionality to Cluster Manager (#5265)
- Additional commits viewable in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/grpc&package-manager=go_modules&previous-version=1.45.0&new-version=1.46.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* config: minor template infrastructure (#8411)
* crypto: remove unused code (#8412)
* abci++: Remove intermediate protos (#8414)
* Sync protos with their intermediates
Signed-off-by: Thane Thomson
* Remove intermediate protos and their supporting scripts
Signed-off-by: Thane Thomson
* make proto-gen
Signed-off-by: Thane Thomson
* build(deps): Bump github.com/vektra/mockery/v2 from 2.12.0 to 2.12.1 (#8417)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.12.0 to 2.12.1.
Release notes
Sourced from github.com/vektra/mockery/v2's releases.
v2.12.1
Changelog
- facf60b Add extra test cases for increasing code coverage.
- 2e1360a Collapse if statements and rename interface in the fixtures.
- 8bdc90d Fix test on go1.18.
- fe03b57 Fix tests.
- b8c62f7 Fix: avoid package name collision with inPackage (#291)
- c9dc740 Merge pull request #422 from i-sevostyanov/fix-package-collision
- 58a7f18 Merge pull request #452 from grongor/refactor-first-letter-helper
- 749b2d6 Refactor mock name generation
Commits
- c9dc740 Merge pull request #422 from i-sevostyanov/fix-package-collision
- facf60b Add extra test cases for increasing code coverage.
- 8bdc90d Fix test on go1.18.
- fe03b57 Fix tests.
- 2e1360a Collapse if statements and rename interface in the fixtures.
- b8c62f7 Fix: avoid package name collision with inPackage (#291)
- 58a7f18 Merge pull request #452 from grongor/refactor-first-letter-helper
- 749b2d6 Refactor mock name generation
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/vektra/mockery/v2&package-manager=go_modules&previous-version=2.12.0&new-version=2.12.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump github.com/google/go-cmp from 0.5.7 to 0.5.8 (#8422)
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.7 to 0.5.8.
Release notes
Sourced from github.com/google/go-cmp's releases.
v0.5.8
Reporter changes:
- (#293) Fix printing of types in reporter output for interface and pointer types
- (#294) Use string formatting for slice of bytes in more circumstances
Dependency changes:
- (#292) Update minimum supported version to go1.13 and remove `xerrors` dependency
Commits
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/google/go-cmp&package-manager=go_modules&previous-version=0.5.7&new-version=0.5.8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* fuzz: don't panic on expected errors (#8423)
In the conversion to Go 1.18 fuzzing in e4991fd862c8300254417360feb2d66c5861aa54,
a `return 0` was converted to a panic. A `return 0` is a hint to the fuzzer, not
a failing testcase.
While here, clean up the test by folding setup code into it.
* Unify RPC method signatures and parameter decoding (#8397)
Pass all parameters from JSON-RPC requests to their corresponding handlers
using struct types instead of positional parameters. This allows us to control
encoding of arguments using only the standard library, and to eliminate the
remaining special-purpose JSON encoding hooks in the server.
To support existing use, the server still allows arguments to be encoded in
JSON as either an array or an object.
Related changes:
- Rework the RPCFunc constructor to reduce reflection during RPC call service.
- Add request parameter wrappers for each RPC service method.
- Update the RPC Environment methods to use these types.
- Update the interfaces and shims derived from Environment to the new
signatures.
- Update and extend test cases.
* p2p: remove support for multiple transports and endpoints (#8420)
* node: start rpc service after reactors (#8426)
* p2p: use nodeinfo less often (#8427)
* Use patched link-checker for periodic checks. (#8430)
In #8339 we pointed the markdown link checker action to a patched version that
has the up-to-date version of the underlying check tool. In doing so, I missed
the periodic cron job that runs the same workflow. Update it to use the patched
version also.
* abci++: Vote extension cleanup (#8402)
* Split vote verification/validation based on vote extensions
Some parts of the code need vote extensions to be verified and
validated (mostly in consensus), and other parts of the code don't
because it's possible that, in some cases (as per RFC 017), we won't have
vote extensions.
This explicitly facilitates that split.
Signed-off-by: Thane Thomson
* Only sign extensions in precommits, not prevotes
Signed-off-by: Thane Thomson
* Update privval/file.go
Co-authored-by: M. J. Fromberger
* Apply suggestions from code review
Co-authored-by: M. J. Fromberger
* Temporarily disable extension requirement again for E2E testing
Signed-off-by: Thane Thomson
* Reorganize comment for clarity
Signed-off-by: Thane Thomson
* Leave vote validation and pre-call nil check up to caller of VoteToProto
Signed-off-by: Thane Thomson
* Split complex vote validation test into multiple tests
Signed-off-by: Thane Thomson
* Universally enforce no vote extensions on any vote type but precommits
Signed-off-by: Thane Thomson
* Make error messages more generic
Signed-off-by: Thane Thomson
* Verify with vote extensions when constructing a VoteSet
Signed-off-by: Thane Thomson
* Expand comment for clarity
Signed-off-by: Thane Thomson
* Add extension check for prevotes prior to signing votes
Signed-off-by: Thane Thomson
* Fix supporting test code to only inject extensions into precommits
Signed-off-by: Thane Thomson
* Separate vote malleation from signing in vote tests for clarity
Signed-off-by: Thane Thomson
* Add extension signature length check and corresponding test
Signed-off-by: Thane Thomson
* Perform basic vote validation in CommitToVoteSet
Signed-off-by: Thane Thomson
Co-authored-by: M. J. Fromberger
* rpc: fix byte string decoding for URL parameters (#8431)
In #8397 I tried to remove all the cases where we needed to keep track of the
target type of parameters for JSON encoding, but there is one case still left:
When decoding parameters from URL query terms, there is no way to tell whether
or not we need base64 encoding without knowing whether the underlying type of
the target is string or []byte.
To fix this, keep track of parameters that are []byte valued when RPCFunc is
compiling its argument map, and use that when parsing URL query terms. Update
the tests accordingly.
* crypto: cleanup tmhash package (#8434)
* build(deps): Bump github.com/creachadair/tomledit from 0.0.18 to 0.0.19 (#8440)
Bumps [github.com/creachadair/tomledit](https://github.com/creachadair/tomledit) from 0.0.18 to 0.0.19.
Commits
- 0692e41 Release v0.0.19
- d1160a4 Update default permissions.
- 56f28f4 Move transform tests to that package.
- 3b8b380 Add permissions to CI workflow.
- 409951b Add a quotation test case.
- f35c8be parser: include line numbers in headings, mappings, and values
- 26acca1 Regularize location formatting in diagnostics.
- 3394f59 Add more parser test cases.
- 5ce10cc Rename test file.
- 29f3eb3 Allow compliance tests to be skipped with -short.
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/creachadair/tomledit&package-manager=go_modules&previous-version=0.0.18&new-version=0.0.19)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* build(deps): Bump github.com/btcsuite/btcd from 0.22.0-beta to 0.22.1 (#8439)
Bumps [github.com/btcsuite/btcd](https://github.com/btcsuite/btcd) from 0.22.0-beta to 0.22.1.
Changelog
Sourced from github.com/btcsuite/btcd's changelog.
============================================================================
User visible changes for btcd
A full-node bitcoin implementation written in Go
Changes in 0.22.1 (Wed Apr 27 2022)
- Notable developer-related package changes:
- Update to use chaincfg/chainhash module and remove conflicting
package
- Contributors (alphabetical order):
Changes in 0.22.0 (Tue Jun 01 2021)
- Protocol and network-related changes:
- Add support for witness tx and block in notfound msg (#1625)
- Add support for receiving sendaddrv2 messages from a peer (#1670)
- Fix bug in peer package causing last block height to go backwards
(#1606)
- Add chain parameters for connecting to the public Signet network
(#1692, #1718)
- Crypto changes:
- Fix bug causing panic due to bad R and S signature components in
btcec.RecoverCompact (#1691)
- Set the name (secp256k1) in the CurveParams of the S256 curve
(#1565)
- Notable developer-related package changes:
- Remove unknown block version warning in the blockchain package,
due to false positives triggered by AsicBoost (#1463)
- Add chaincfg.RegisterHDKeyID function to populate HD key ID pairs
(#1617)
- Add new method mining.AddWitnessCommitment to add the witness
commitment as an OP_RETURN output within the coinbase transaction.
(#1716)
- RPC changes:
- Support Batch JSON-RPC in rpcclient and server (#1583)
- Add rpcclient method to invoke getdescriptorinfo JSON-RPC command
(#1578)
- Update the rpcserver handler for validateaddress JSON-RPC command to
have parity with the bitcoind 0.20.0 interface (#1613)
- Add rpcclient method to invoke getblockfilter JSON-RPC command
(#1579)
- Add signmessagewithprivkey JSON-RPC command in rpcserver (#1585)
- Add rpcclient method to invoke importmulti JSON-RPC command (#1579)
- Add watchOnly argument in rpcclient method to invoke
listtransactions JSON-RPC command (#1628)
- Update btcjson.ListTransactionsResult for compatibility with Bitcoin
Core 0.20.0 (#1626)
- Support nullable optional JSON-RPC parameters (#1594)
- Add rpcclient and server method to invoke getnodeaddresses JSON-RPC
... (truncated)
Commits
- 2f508b3 Update CHANGES file for 0.22.1 release.
- ff92d88 btcd: bump version to v0.22.1.
- cf5c461 main: Switch to chaincfg/chainhash module.
- See full diff in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/btcsuite/btcd&package-manager=go_modules&previous-version=0.22.0-beta&new-version=0.22.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* consensus: reduce size of validator set changes test (#8442)
* privval/grpc: normalize signature (#8441)
* p2p: avoid using p2p.Channel internals (#8444)
* blocksync: Honor contexts supplied to BlockPool (#8447)
* Lift condition into for loop
Signed-off-by: Thane Thomson
* Honor contexts in BlockPool
Signed-off-by: Thane Thomson
* Only stop timers when necessary
Signed-off-by: Thane Thomson
* Optimize timers
Signed-off-by: Thane Thomson
* Simplify request interval definition
Signed-off-by: Thane Thomson
* Remove extraneous timer stop
Signed-off-by: Thane Thomson
* Convert switch into if
Signed-off-by: Thane Thomson
* Eliminate timers
Signed-off-by: Thane Thomson
* PBTS: system model made more precise (#8096)
* PBTS model: precision, accuracy, and delay defs
* PBTS model: consensus properties reviewed
* PBTS model: reinforcing alignment with UTC
* PBTS model: precision parameter embodies accuracy
* PBTS model: discussion about accuracy shortened
* PBTS model: proposal time monotonicity rephrased
* PBTS model: Safety Invariants subsection
* PBTS model: MSGDELAY description shortened
* PBTS model: timely proposals definition refined
* PBTS model: some formatting changes
* PBTS model: timely predicate definition
* PBTS model: timely proof-of-lock re-defined
* PBTS model: derived proof-of-lock requirements
* The property needs to be properly demonstrated.
* Apply suggestions from William
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* PBTS model: reference to arXiv algorithm on timely
* PBTS model: typos fixed
* PBTS model: derived POL "demonstration"
* PBTS model: fix formatting, r' renamed to vr
* PBTS model: minor fixes
* PBTS model: derived POL proof amended
* PBTS safety: consensus validity with time inequality
* PBTS: renamed receiveTime to proposalReceptionTime
* PBTS safety: short intro, some links
* PBTS model: safety refactored again
* PBTS model: liveness condition stated
* PBTS liveness: minor change
* Update spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
* Update spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
* Update spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
* Update spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
* Update spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
* PBTS sysmodel: formatting typo fixed
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
* build(deps): Bump docker/setup-buildx-action from 1.6.0 to 1.7.0 (#8451)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1.6.0 to 1.7.0.
Release notes
Sourced from docker/setup-buildx-action's releases.
v1.7.0
- Standalone mode by @crazy-max in (#119)
- Update dev dependencies and workflow by @crazy-max (#114 #130)
- Bump tmpl from 1.0.4 to 1.0.5 (#108)
- Bump ansi-regex from 5.0.0 to 5.0.1 (#109)
- Bump @actions/core from 1.5.0 to 1.6.0 (#110)
- Bump actions/checkout from 2 to 3 (#126)
- Bump @actions/tool-cache from 1.7.1 to 1.7.2 (#128)
- Bump @actions/exec from 1.1.0 to 1.1.1 (#129)
- Bump minimist from 1.2.5 to 1.2.6 (#132)
- Bump codecov/codecov-action from 2 to 3 (#133)
- Bump semver from 7.3.5 to 7.3.7 (#136)
Commits
- f211e3e Merge pull request #136 from docker/dependabot/npm_and_yarn/semver-7.3.7
- b23216e Update generated content
- be7e600 Bump semver from 7.3.5 to 7.3.7
- 7117987 Merge pull request #119 from crazy-max/standalone
- 17ebdd4 ci: add jobs to check standalone behavior
- 3472856 support standalone mode and display version
- 74283ca Merge pull request #133 from docker/dependabot/github_actions/codecov/codecov...
- 5b77ad4 Bump codecov/codecov-action from 2 to 3
- 2a6fbda Merge pull request #132 from docker/dependabot/npm_and_yarn/minimist-1.2.6
- 03815bd Bump minimist from 1.2.5 to 1.2.6
- Additional commits viewable in compare view
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-buildx-action&package-manager=github_actions&previous-version=1.6.0&new-version=1.7.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
* fix: first part of modification after merge
* fix: eliminate compile level issues
* fix: unit tests in abci/example/kvstore
* fix: unit tests in dash/quorum package
* fix: deadlock at types.MockPV
* fix: blocksync package
* fix: evidence package
* fix: made some fixes/improvements
* fix: change a payload hash of a message vote
* fix: remove using a mutex in processPeerUpdate to fix a deadlock
* fix: remove double incrementing
* fix: some modifications for fixing unit tests
* fix: modify TestVoteString
* fix: some fixes / improvements
* fix: some fixes / improvements
* fix: override genesis time for pbts tests
* fix: pbts tests
* fix: disable checking duplicate votes
* fix: use the current time always when making proposal block
* fix: consensus state tests
* fix: consensus state tests
* fix: consensus state tests
* fix: the tests inside state package
* fix: node tests
* fix: add custom marshalling/unmarshalling for coretypes.ResultValidators
* fix: add checking on nil in Vote.MarshalZerologObject
* fix: light client tests
* fix: rpc tests
* fix: remove duplicate test TestApp_Height
* fix: add mutex for transport_mconn.go
* fix: add required option "create-proof-block-range" in a config testdata
* chore: remove printing debug stacktrace for a duplicate vote
* fix: type error in generateDuplicateVoteEvidence
* fix: use thread safe way for interacting with consensus state
* chore: remove redundant mock cons_sync_reactor.go
* fix: use a normal time ticker for some consensus unit tests
* fix: e2e tests
* fix: lint issues
* fix: abci-cli
* fix: detected data race
* chore: remove github CI docs-toc.yml workflow
* chore: refactor e2e initialization
* test(cmd): use correct home path in TestRootConfig
* refactor(node): Simplify priv validator initialization code
* fix(node): proTxHash not correctly initialized
* chore(node): fix whitespace and comments
* refactor: add some modifications by RP feedback
* fix: proto lint
* fix: reuse setValSetUpdate to update validator index and validator-set-updates item in a storage
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Sam Kleinman
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger
Co-authored-by: Manuel Bravo
Co-authored-by: Sergio Mena
Co-authored-by: Marko
Co-authored-by: Thane Thomson
Co-authored-by: JayT106
Co-authored-by: frog power 4000
Co-authored-by: Jordi Pinyana
Co-authored-by: M. J. Fromberger
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: Simon Kirillov
Co-authored-by: Simon Kirillov
Co-authored-by: elias-orijtech <103319121+elias-orijtech@users.noreply.github.com>
Co-authored-by: Chill Validation <92176880+chillyvee@users.noreply.github.com>
Co-authored-by: John Adler
Co-authored-by: Callum Waters
Co-authored-by: Ismail Khoffi
Co-authored-by: William Banfield
Co-authored-by: Daniel
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
Co-authored-by: Lukasz Klimek <842586+lklimek@users.noreply.github.com>
---
.github/ISSUE_TEMPLATE/proposal.md | 37 +
.github/dependabot.yml | 5 +-
.github/workflows/build.yml | 15 +-
.github/workflows/docker.yml | 4 +-
.github/workflows/e2e-manual.yml | 12 +-
.github/workflows/e2e-nightly-34x.yml | 8 +-
.github/workflows/e2e-nightly-35x.yml | 75 +
.github/workflows/e2e-nightly-master.yml | 17 +-
.github/workflows/e2e.yml | 6 +-
.github/workflows/fuzz-nightly.yml | 29 +-
.github/workflows/jepsen.yml | 4 +-
.github/workflows/linkchecker.yml | 4 +-
.github/workflows/lint.yml | 22 +-
.github/workflows/linter.yml | 4 +-
.github/workflows/markdown-links.yml | 23 +
.github/workflows/proto-docker.yml | 51 -
.github/workflows/proto-lint.yml | 21 +
.github/workflows/proto.yml | 23 -
.github/workflows/release.yml | 8 +-
.github/workflows/stale.yml | 2 +-
.github/workflows/tests.yml | 22 +-
.gitignore | 8 +
.markdownlint.yml | 11 +
.md-link-check.json | 6 +
CHANGELOG_PENDING.md | 61 +
CODE_OF_CONDUCT.md | 2 +-
CONTRIBUTING.md | 172 +-
DOCKER/.gitignore | 1 -
DOCKER/Dockerfile.build_c-amazonlinux | 27 -
DOCKER/Dockerfile.testing | 16 -
DOCKER/Makefile | 13 -
DOCKER/README.md | 2 +-
DOCKER/build.sh | 20 -
DOCKER/push.sh | 22 -
Makefile | 55 +-
README.md | 61 +-
RELEASES.md | 207 +
SECURITY.md | 4 +-
UPGRADING.md | 179 +-
abci/README.md | 2 +-
abci/client/client.go | 129 +-
abci/client/creators.go | 35 -
abci/client/doc.go | 17 +-
abci/client/grpc_client.go | 470 +-
abci/client/local_client.go | 339 +-
abci/client/mocks/client.go | 576 +-
abci/client/socket_client.go | 450 +-
abci/client/socket_client_test.go | 127 -
abci/cmd/abci-cli/abci-cli.go | 341 +-
abci/example/counter/counter.go | 81 +-
abci/example/example_test.go | 160 +-
abci/example/kvstore/README.md | 2 +-
abci/example/kvstore/helpers.go | 7 +-
abci/example/kvstore/kvstore.go | 399 +-
abci/example/kvstore/kvstore_test.go | 285 +-
abci/example/kvstore/persistent_kvstore.go | 233 +-
abci/server/grpc_server.go | 62 +-
abci/server/server.go | 7 +-
abci/server/socket_server.go | 319 +-
abci/tests/client_server_test.go | 25 +-
abci/tests/server/client.go | 48 +-
abci/tests/test_cli/ex1.abci | 4 +-
abci/tests/test_cli/ex1.abci.out | 18 +-
abci/tests/test_cli/ex2.abci | 6 +-
abci/tests/test_cli/ex2.abci.out | 10 +-
abci/types/application.go | 191 +-
abci/types/client.go | 1 -
abci/types/messages.go | 122 +-
abci/types/messages_test.go | 20 +-
abci/types/mocks/application.go | 349 +
abci/types/{result.go => types.go} | 74 +
abci/types/types.pb.go | 19209 ++++++++++------
abci/types/types_test.go | 74 +
abci/version/version.go | 9 -
buf.gen.yaml | 18 +-
buf.work.yaml | 3 +
cmd/priv_val_server/main.go | 30 +-
cmd/tenderdash/commands/completion.go | 46 +
cmd/tenderdash/commands/debug/debug.go | 34 +-
cmd/tenderdash/commands/debug/dump.go | 140 +-
cmd/tenderdash/commands/debug/io.go | 3 +-
cmd/tenderdash/commands/debug/kill.go | 155 +-
cmd/tenderdash/commands/debug/util.go | 18 +-
cmd/tenderdash/commands/gen_node_key.go | 4 +-
cmd/tenderdash/commands/gen_validator.go | 47 +-
cmd/tenderdash/commands/init.go | 85 +-
cmd/tenderdash/commands/inspect.go | 70 +-
cmd/tenderdash/commands/key_migrate.go | 16 +-
cmd/tenderdash/commands/light.go | 332 +-
cmd/tenderdash/commands/probe_upnp.go | 32 -
cmd/tenderdash/commands/reindex_event.go | 153 +-
cmd/tenderdash/commands/reindex_event_test.go | 47 +-
cmd/tenderdash/commands/replay.go | 37 +-
cmd/tenderdash/commands/reset.go | 179 +
.../commands/reset_priv_validator.go | 94 -
cmd/tenderdash/commands/reset_test.go | 62 +
cmd/tenderdash/commands/rollback.go | 33 +-
cmd/tenderdash/commands/root.go | 87 +-
cmd/tenderdash/commands/root_test.go | 156 +-
cmd/tenderdash/commands/run_node.go | 105 +-
cmd/tenderdash/commands/show_node_id.go | 28 +-
cmd/tenderdash/commands/show_validator.go | 110 +-
cmd/tenderdash/commands/testnet.go | 419 +-
cmd/tenderdash/main.go | 66 +-
config/config.go | 482 +-
config/config_test.go | 110 +-
config/db.go | 4 +-
config/toml.go | 268 +-
config/toml_test.go | 26 +-
crypto/README.md | 2 +-
crypto/bls12381/bls12381.go | 27 +-
crypto/crypto.go | 85 +-
crypto/crypto_test.go | 17 +
crypto/ed25519/bench_test.go | 1 +
crypto/ed25519/ed25519.go | 26 +-
crypto/ed25519/ed25519_test.go | 2 +-
crypto/encoding/codec.go | 10 +-
crypto/example_test.go | 28 -
crypto/hash.go | 11 -
crypto/merkle/hash.go | 8 +-
crypto/merkle/proof.go | 18 +-
crypto/merkle/proof_key_path_test.go | 4 +-
crypto/merkle/proof_test.go | 28 +-
crypto/merkle/proof_value.go | 11 +-
crypto/merkle/rfc6962_test.go | 10 +-
crypto/merkle/tree_test.go | 8 +-
crypto/random.go | 17 +-
crypto/secp256k1/secp256k1.go | 66 +-
crypto/secp256k1/secp256k1_test.go | 2 +-
crypto/tmhash/hash.go | 65 -
crypto/tmhash/hash_test.go | 48 -
crypto/version.go | 3 -
crypto/xchacha20poly1305/vector_test.go | 122 -
crypto/xchacha20poly1305/xchachapoly.go | 259 -
crypto/xchacha20poly1305/xchachapoly_test.go | 113 -
crypto/xsalsa20symmetric/symmetric.go | 54 -
crypto/xsalsa20symmetric/symmetric_test.go | 40 -
{dashcore/rpc => dash/core}/client.go | 18 +-
{dashcore/rpc => dash/core}/mock.go | 14 +-
dash/llmq/llmq.go | 3 +-
dash/quorum/mock/dash_dialer.go | 2 +-
dash/quorum/selectpeers/dip6.go | 2 +-
dash/quorum/selectpeers/sortable_validator.go | 4 +-
.../selectpeers/sorted_validator_list.go | 2 +-
dash/quorum/validator_conn_executor.go | 112 +-
dash/quorum/validator_conn_executor_test.go | 194 +-
docs/DOCS_README.md | 6 +-
docs/README.md | 2 +-
docs/app-dev/abci-cli.md | 50 +-
docs/app-dev/app-architecture.md | 2 +-
docs/app-dev/getting-started.md | 50 +-
docs/app-dev/indexing-transactions.md | 2 +-
docs/app-dev/readme.md | 5 +-
docs/architecture/adr-073-libp2p.md | 235 +
docs/architecture/adr-074-timeout-params.md | 203 +
docs/architecture/adr-075-rpc-subscription.md | 684 +
.../architecture/adr-076-combine-spec-repo.md | 112 +
docs/architecture/adr-077-block-retention.md | 109 +
docs/architecture/adr-078-nonzero-genesis.md | 82 +
.../adr-079-ed25519-verification.md | 57 +
docs/architecture/adr-080-reverse-sync.md | 203 +
docs/architecture/adr-081-protobuf-mgmt.md | 201 +
docs/introduction/architecture.md | 2 +-
docs/introduction/what-is-tendermint.md | 18 +-
docs/networks/README.md | 16 -
docs/nodes/README.md | 2 +-
docs/nodes/configuration.md | 309 +-
docs/nodes/logging.md | 4 +-
docs/nodes/metrics.md | 1 +
docs/nodes/remote-signer.md | 2 +-
docs/nodes/validators.md | 6 +-
docs/package-lock.json | 48 +-
docs/pre.sh | 1 +
docs/presubmit.sh | 39 +
docs/rfc/images/abci++.png | Bin 0 -> 2792638 bytes
docs/rfc/images/abci.png | Bin 0 -> 2644284 bytes
docs/rfc/rfc-006-event-subscription.md | 204 +
docs/rfc/rfc-007-deterministic-proto-bytes.md | 140 +
docs/rfc/rfc-008-do-not-panic.md | 139 +
.../rfc-009-consensus-parameter-upgrades.md | 128 +
docs/rfc/rfc-010-p2p-light-client.rst | 145 +
docs/rfc/rfc-011-delete-gas.md | 162 +
docs/rfc/rfc-012-custom-indexing.md | 352 +
docs/rfc/rfc-013-abci++.md | 253 +
docs/rfc/rfc-014-semantic-versioning.md | 94 +
docs/rfc/rfc-015-abci++-tx-mutation.md | 261 +
docs/rfc/rfc-019-config-version.md | 400 +
docs/roadmap/roadmap.md | 32 +-
docs/tendermint-core/README.md | 5 +-
docs/tendermint-core/block-structure.md | 2 +-
.../block-sync/img/block-retention.png | Bin 0 -> 53718 bytes
docs/tendermint-core/consensus/README.md | 2 +-
.../consensus/proposer-based-timestamps.md | 95 +
docs/tendermint-core/light-client.md | 4 +-
docs/tendermint-core/rpc.md | 2 +-
docs/tendermint-core/subscription.md | 54 +-
docs/tendermint-core/using-tendermint.md | 4 +-
docs/tools/README.md | 9 +-
.../proposer-based-timestamps-runbook.md | 216 +
docs/{networks => tools}/docker-compose.md | 0
docs/tools/remote-signer-validation.md | 156 -
.../terraform-and-ansible.md | 2 +-
docs/tutorials/go-built-in.md | 99 +-
docs/tutorials/go.md | 50 +-
docs/tutorials/readme.md | 2 +-
go.mod | 127 +-
go.sum | 228 +-
internal/blocksync/doc.go | 11 +-
internal/blocksync/{v0 => }/pool.go | 158 +-
internal/blocksync/{v0 => }/pool_test.go | 58 +-
internal/blocksync/{v0 => }/reactor.go | 502 +-
internal/blocksync/{v0 => }/reactor_test.go | 160 +-
.../blocksync/v2/internal/behavior/doc.go | 42 -
.../v2/internal/behavior/peer_behaviour.go | 47 -
.../v2/internal/behavior/reporter.go | 87 -
.../v2/internal/behavior/reporter_test.go | 205 -
internal/blocksync/v2/io.go | 187 -
internal/blocksync/v2/metrics.go | 125 -
internal/blocksync/v2/processor.go | 194 -
internal/blocksync/v2/processor_context.go | 117 -
internal/blocksync/v2/processor_test.go | 305 -
internal/blocksync/v2/reactor.go | 650 -
internal/blocksync/v2/reactor_test.go | 540 -
internal/blocksync/v2/routine.go | 166 -
internal/blocksync/v2/routine_test.go | 163 -
internal/blocksync/v2/scheduler.go | 711 -
internal/blocksync/v2/scheduler_test.go | 2253 --
internal/blocksync/v2/types.go | 65 -
internal/consensus/README.md | 3 -
internal/consensus/byzantine_test.go | 461 +-
internal/consensus/common_test.go | 915 +-
internal/consensus/core_chainlock_test.go | 146 +-
internal/consensus/invalid_test.go | 95 +-
internal/consensus/mempool_test.go | 223 +-
internal/consensus/metrics.go | 61 +-
internal/consensus/mocks/cons_sync_reactor.go | 28 -
internal/consensus/mocks/fast_sync_reactor.go | 1 +
internal/consensus/msgs.go | 80 +-
internal/consensus/msgs_test.go | 14 +-
internal/consensus/pbts_test.go | 506 +
internal/consensus/peer_state.go | 49 +-
internal/consensus/peer_state_test.go | 101 +
internal/consensus/reactor.go | 941 +-
internal/consensus/reactor_test.go | 476 +-
internal/consensus/replay.go | 229 +-
internal/consensus/replay_file.go | 191 +-
internal/consensus/replay_stubs.go | 49 +-
internal/consensus/replay_test.go | 1071 +-
internal/consensus/state.go | 1433 +-
internal/consensus/state_test.go | 2849 ++-
internal/consensus/ticker.go | 43 +-
internal/consensus/types/height_vote_set.go | 12 +-
.../consensus/types/height_vote_set_test.go | 56 +-
.../consensus/types/peer_round_state_test.go | 1 +
internal/consensus/types/round_state.go | 28 +-
internal/consensus/wal.go | 77 +-
internal/consensus/wal_generator.go | 140 +-
internal/consensus/wal_test.go | 170 +-
internal/eventbus/event_bus.go | 200 +
internal/eventbus/event_bus_test.go | 559 +
internal/eventlog/cursor/cursor.go | 100 +
internal/eventlog/cursor/cursor_test.go | 141 +
internal/eventlog/eventlog.go | 217 +
internal/eventlog/eventlog_test.go | 222 +
internal/eventlog/item.go | 78 +
internal/eventlog/metrics.go | 39 +
internal/eventlog/prune.go | 111 +
internal/evidence/doc.go | 2 +-
internal/evidence/metrics.go | 47 +
internal/evidence/mocks/block_store.go | 13 +
internal/evidence/pool.go | 110 +-
internal/evidence/pool_test.go | 230 +-
internal/evidence/reactor.go | 250 +-
internal/evidence/reactor_test.go | 173 +-
internal/evidence/verify.go | 5 +-
internal/evidence/verify_test.go | 67 +-
internal/inspect/inspect.go | 52 +-
internal/inspect/inspect_test.go | 85 +-
internal/inspect/rpc/rpc.go | 59 +-
internal/jsontypes/jsontypes.go | 121 +
internal/jsontypes/jsontypes_test.go | 188 +
{libs => internal/libs}/async/async.go | 0
{libs => internal/libs}/async/async_test.go | 0
internal/libs/autofile/autofile.go | 128 +-
internal/libs/autofile/autofile_test.go | 32 +-
internal/libs/autofile/cmd/logjack.go | 81 +-
internal/libs/autofile/group.go | 113 +-
internal/libs/autofile/group_test.go | 84 +-
internal/libs/clist/bench_test.go | 2 +-
internal/libs/clist/clist.go | 120 +-
internal/libs/clist/clist_test.go | 34 +-
internal/libs/fail/fail.go | 40 -
internal/libs/flowrate/README.md | 10 -
internal/libs/flowrate/flowrate.go | 47 +-
internal/libs/flowrate/io.go | 133 -
internal/libs/flowrate/io_test.go | 197 -
internal/libs/flowrate/util.go | 13 +-
internal/libs/protoio/io_test.go | 13 +-
internal/libs/protoio/writer_test.go | 17 +-
internal/libs/queue/queue.go | 232 +
internal/libs/queue/queue_test.go | 194 +
internal/libs/sync/closer.go | 31 -
internal/libs/sync/closer_test.go | 28 -
internal/libs/sync/deadlock.go | 18 -
internal/libs/sync/sync.go | 16 -
internal/libs/tempfile/tempfile.go | 5 +-
internal/libs/tempfile/tempfile_test.go | 23 +-
internal/libs/timer/throttle_timer.go | 12 +-
internal/libs/timer/throttle_timer_test.go | 5 +-
internal/mempool/cache.go | 4 +-
internal/mempool/ids.go | 28 +-
internal/mempool/ids_test.go | 70 +-
internal/mempool/mempool.go | 904 +-
internal/mempool/mempool_bench_test.go | 50 +
internal/mempool/{v1 => }/mempool_test.go | 250 +-
internal/mempool/mock/mempool.go | 45 -
internal/mempool/mocks/mempool.go | 184 +
internal/mempool/{v1 => }/priority_queue.go | 7 +-
.../mempool/{v1 => }/priority_queue_test.go | 2 +-
internal/mempool/{v1 => }/reactor.go | 251 +-
internal/mempool/reactor_test.go | 422 +
internal/mempool/tx.go | 276 +
internal/mempool/{v1 => }/tx_test.go | 3 +-
internal/mempool/types.go | 146 +
internal/mempool/v0/bench_test.go | 107 -
internal/mempool/v0/cache_test.go | 83 -
internal/mempool/v0/clist_mempool.go | 698 -
internal/mempool/v0/clist_mempool_test.go | 687 -
internal/mempool/v0/doc.go | 23 -
internal/mempool/v0/reactor.go | 402 -
internal/mempool/v0/reactor_test.go | 392 -
internal/mempool/v1/mempool.go | 887 -
internal/mempool/v1/mempool_bench_test.go | 32 -
internal/mempool/v1/reactor_test.go | 146 -
internal/mempool/v1/tx.go | 281 -
internal/p2p/README.md | 2 +-
internal/p2p/address.go | 8 +-
internal/p2p/address_test.go | 41 +-
internal/p2p/base_reactor.go | 74 -
internal/p2p/channel.go | 212 +
internal/p2p/channel_test.go | 221 +
internal/p2p/conn/conn_go110.go | 16 -
internal/p2p/conn/conn_notgo110.go | 33 -
internal/p2p/conn/connection.go | 355 +-
internal/p2p/conn/connection_test.go | 344 +-
internal/p2p/conn/secret_connection.go | 8 +-
internal/p2p/conn/secret_connection_test.go | 20 +-
internal/p2p/conn_set.go | 82 -
internal/p2p/conn_tracker.go | 3 +-
internal/p2p/conn_tracker_test.go | 11 +
internal/p2p/errors.go | 10 +-
internal/p2p/metrics_test.go | 1 +
internal/p2p/mock/peer.go | 70 -
internal/p2p/mock/reactor.go | 25 -
internal/p2p/mocks/connection.go | 96 +-
internal/p2p/mocks/peer.go | 334 -
internal/p2p/mocks/transport.go | 71 +-
internal/p2p/netaddress.go | 11 -
internal/p2p/p2p_test.go | 10 +-
internal/p2p/p2ptest/network.go | 146 +-
internal/p2p/p2ptest/require.go | 129 +-
internal/p2p/p2ptest/util.go | 1 +
internal/p2p/peer.go | 383 -
internal/p2p/peer_set.go | 149 -
internal/p2p/peer_set_test.go | 189 -
internal/p2p/peer_test.go | 239 -
internal/p2p/peermanager.go | 115 +-
internal/p2p/peermanager_scoring_test.go | 35 +-
internal/p2p/peermanager_test.go | 226 +-
internal/p2p/pex/addrbook.go | 948 -
internal/p2p/pex/addrbook_test.go | 777 -
internal/p2p/pex/bench_test.go | 24 -
internal/p2p/pex/doc.go | 9 +-
internal/p2p/pex/errors.go | 89 -
internal/p2p/pex/file.go | 83 -
internal/p2p/pex/known_address.go | 141 -
internal/p2p/pex/params.go | 55 -
internal/p2p/pex/pex_reactor.go | 886 -
internal/p2p/pex/pex_reactor_test.go | 682 -
internal/p2p/pex/reactor.go | 499 +-
internal/p2p/pex/reactor_test.go | 480 +-
internal/p2p/pqueue.go | 77 +-
internal/p2p/pqueue_test.go | 16 +-
internal/p2p/queue.go | 30 +-
internal/p2p/router.go | 426 +-
internal/p2p/router_filter_test.go | 6 +-
internal/p2p/router_init_test.go | 29 +-
internal/p2p/router_test.go | 465 +-
internal/p2p/shim.go | 341 -
internal/p2p/shim_test.go | 210 -
internal/p2p/switch.go | 1064 -
internal/p2p/switch_test.go | 937 -
internal/p2p/test_util.go | 296 -
internal/p2p/transport.go | 62 +-
internal/p2p/transport_mconn.go | 189 +-
internal/p2p/transport_mconn_test.go | 79 +-
internal/p2p/transport_memory.go | 152 +-
internal/p2p/transport_memory_test.go | 3 +-
internal/p2p/transport_test.go | 260 +-
internal/p2p/trust/config.go | 55 -
internal/p2p/trust/metric.go | 412 -
internal/p2p/trust/metric_test.go | 118 -
internal/p2p/trust/store.go | 220 -
internal/p2p/trust/store_test.go | 163 -
internal/p2p/trust/ticker.go | 62 -
internal/p2p/types.go | 2 +-
internal/p2p/upnp/probe.go | 111 -
internal/p2p/upnp/upnp.go | 404 -
internal/p2p/wdrr_queue.go | 287 -
internal/p2p/wdrr_queue_test.go | 208 -
internal/proxy/app_conn.go | 250 -
internal/proxy/app_conn_test.go | 186 -
internal/proxy/client.go | 197 +-
internal/proxy/client_test.go | 235 +
internal/proxy/mocks/app_conn_consensus.go | 135 +-
internal/proxy/mocks/app_conn_mempool.go | 61 +-
internal/proxy/mocks/app_conn_query.go | 12 +-
internal/proxy/mocks/app_conn_snapshot.go | 16 +-
internal/proxy/multi_app_conn.go | 202 -
internal/proxy/multi_app_conn_test.go | 94 -
internal/pubsub/example_test.go | 34 +
internal/pubsub/pubsub.go | 421 +
internal/pubsub/pubsub_test.go | 482 +
{libs => internal}/pubsub/query/bench_test.go | 7 +-
{libs => internal}/pubsub/query/query.go | 27 +-
{libs => internal}/pubsub/query/query_test.go | 18 +-
{libs => internal}/pubsub/query/syntax/doc.go | 0
.../pubsub/query/syntax/parser.go | 0
.../pubsub/query/syntax/scanner.go | 0
.../pubsub/query/syntax/syntax_test.go | 2 +-
internal/pubsub/subindex.go | 117 +
internal/pubsub/subscription.go | 90 +
internal/rpc/core/abci.go | 26 +-
internal/rpc/core/blocks.go | 91 +-
internal/rpc/core/blocks_test.go | 28 +-
internal/rpc/core/consensus.go | 99 +-
internal/rpc/core/dev.go | 5 +-
internal/rpc/core/env.go | 204 +-
internal/rpc/core/events.go | 256 +-
internal/rpc/core/evidence.go | 18 +-
internal/rpc/core/health.go | 5 +-
internal/rpc/core/mempool.go | 193 +-
internal/rpc/core/net.go | 131 +-
internal/rpc/core/net_test.go | 89 -
internal/rpc/core/routes.go | 141 +-
internal/rpc/core/status.go | 32 +-
internal/rpc/core/tx.go | 56 +-
internal/state/errors.go | 8 +-
internal/state/execution.go | 543 +-
internal/state/execution_test.go | 749 +-
internal/state/export_test.go | 31 -
internal/state/helpers_test.go | 226 +-
internal/state/indexer/block/kv/kv.go | 32 +-
internal/state/indexer/block/kv/kv_test.go | 56 +-
internal/state/indexer/block/kv/util.go | 3 +-
internal/state/indexer/block/null/null.go | 2 +-
internal/state/indexer/eventsink.go | 2 +-
internal/state/indexer/indexer.go | 24 +-
internal/state/indexer/indexer_service.go | 178 +-
.../state/indexer/indexer_service_test.go | 71 +-
internal/state/indexer/mocks/event_sink.go | 14 +-
internal/state/indexer/query_range.go | 2 +-
internal/state/indexer/sink/kv/kv.go | 14 +-
internal/state/indexer/sink/kv/kv_test.go | 58 +-
internal/state/indexer/sink/null/null.go | 2 +-
internal/state/indexer/sink/null/null_test.go | 16 +-
internal/state/indexer/sink/psql/psql.go | 10 +-
internal/state/indexer/sink/psql/psql_test.go | 66 +-
internal/state/indexer/tx/kv/kv.go | 8 +-
internal/state/indexer/tx/kv/kv_bench_test.go | 10 +-
internal/state/indexer/tx/kv/kv_test.go | 18 +-
internal/state/indexer/tx/null/null.go | 2 +-
internal/state/mocks/block_store.go | 12 +
internal/state/mocks/event_sink.go | 3 +-
internal/state/mocks/evidence_pool.go | 41 +-
internal/state/mocks/store.go | 12 +
internal/state/rollback_test.go | 2 +
internal/state/services.go | 16 +-
internal/state/state.go | 55 +-
internal/state/state_test.go | 317 +-
internal/state/store.go | 49 +-
internal/state/store_test.go | 54 +-
internal/state/test/factory/block.go | 56 +-
internal/state/time.go | 46 -
internal/state/time_test.go | 57 -
internal/state/tx_filter.go | 81 +-
internal/state/tx_filter_test.go | 2 +-
internal/state/validation.go | 13 +-
internal/state/validation_test.go | 239 +-
internal/statesync/block_queue.go | 2 +-
internal/statesync/block_queue_test.go | 46 +-
internal/statesync/chunks.go | 11 +-
internal/statesync/chunks_test.go | 11 +-
internal/statesync/dispatcher.go | 56 +-
internal/statesync/dispatcher_test.go | 92 +-
internal/statesync/mocks/state_provider.go | 12 +
internal/statesync/reactor.go | 616 +-
internal/statesync/reactor_test.go | 343 +-
internal/statesync/snapshots.go | 4 +-
internal/statesync/stateprovider.go | 37 +-
internal/statesync/syncer.go | 101 +-
internal/statesync/syncer_test.go | 246 +-
internal/store/store.go | 8 +-
internal/store/store_test.go | 151 +-
internal/test/factory/block.go | 26 +-
internal/test/factory/commit.go | 45 +-
internal/test/factory/factory_test.go | 6 +-
internal/test/factory/genesis.go | 12 +-
internal/test/factory/p2p.go | 21 +-
internal/test/factory/params.go | 22 +
internal/test/factory/tx.go | 13 +-
internal/test/factory/vote.go | 8 +-
libs/bits/bit_array.go | 51 +-
libs/bits/bit_array_test.go | 32 +-
libs/bytes/bytes.go | 58 +-
libs/bytes/bytes_test.go | 6 +-
libs/cli/helper.go | 112 +-
libs/cli/setup.go | 106 +-
libs/cli/setup_test.go | 114 +-
libs/cmap/cmap.go | 91 -
libs/cmap/cmap_test.go | 113 -
libs/events/Makefile | 9 -
libs/events/README.md | 193 -
libs/events/event_cache.go | 37 -
libs/events/event_cache_test.go | 41 -
libs/events/events.go | 163 +-
libs/events/events_test.go | 499 +-
libs/json/decoder.go | 278 -
libs/json/decoder_test.go | 151 -
libs/json/doc.go | 99 -
libs/json/encoder.go | 254 -
libs/json/encoder_test.go | 104 -
libs/json/helpers_test.go | 91 -
libs/json/structs.go | 88 -
libs/json/types.go | 109 -
libs/log/default.go | 50 +-
libs/log/default_test.go | 6 +-
libs/log/nop.go | 3 +-
libs/log/testing.go | 62 +-
libs/math/safemath.go | 39 +-
libs/os/os.go | 27 -
libs/os/os_test.go | 89 +-
libs/pubsub/example_test.go | 42 -
libs/pubsub/pubsub.go | 527 -
libs/pubsub/pubsub_test.go | 573 -
libs/pubsub/subscription.go | 112 -
libs/rand/random.go | 43 -
libs/service/service.go | 251 +-
libs/service/service_test.go | 148 +-
libs/strings/string.go | 50 +-
libs/strings/string_test.go | 64 +-
libs/sync/atomic_bool.go | 33 -
libs/sync/atomic_bool_test.go | 27 -
libs/time/mocks/source.go | 40 +
libs/time/time.go | 14 +
light/client.go | 102 +-
light/client_benchmark_test.go | 32 +-
light/client_test.go | 1053 +-
light/doc.go | 4 +-
light/example_test.go | 63 +-
light/helpers_test.go | 67 +-
light/light_test.go | 104 +-
light/provider/errors.go | 10 +-
light/provider/http/http.go | 11 +-
light/provider/http/http_test.go | 36 +-
light/provider/mocks/provider.go | 26 +
light/provider/provider.go | 4 +
light/proxy/proxy.go | 25 +-
light/proxy/routes.go | 316 +-
light/rpc/client.go | 118 +-
light/rpc/mocks/light_client.go | 28 +
light/setup.go | 4 +-
light/store/db/db.go | 4 +-
light/store/db/db_test.go | 44 +-
networks/local/README.md | 2 +-
node/node.go | 1403 +-
node/node_test.go | 484 +-
node/public.go | 31 +-
node/seed.go | 162 +
node/setup.go | 850 +-
privval/dash_core_mock_signer_server.go | 2 +-
privval/dash_core_signer_client.go | 62 +-
privval/file.go | 297 +-
privval/file_test.go | 360 +-
privval/grpc/client.go | 3 +-
privval/grpc/client_test.go | 60 +-
privval/grpc/server.go | 12 +-
privval/grpc/server_test.go | 37 +-
privval/grpc/util.go | 14 +-
privval/msgs_test.go | 35 +-
privval/retry_signer_client.go | 14 +-
privval/rpc_signer_connection.go | 1 -
privval/secret_connection.go | 43 +-
privval/signer_client.go | 60 +-
privval/signer_client_test.go | 618 +-
privval/signer_dialer_endpoint.go | 26 +-
privval/signer_endpoint.go | 15 +-
privval/signer_listener_endpoint.go | 46 +-
privval/signer_listener_endpoint_test.go | 82 +-
privval/signer_server.go | 34 +-
privval/socket_dialers_test.go | 18 +-
privval/socket_listeners_test.go | 18 +-
privval/utils.go | 9 -
proto/README.md | 21 +
proto/buf.lock | 7 +
buf.yaml => proto/buf.yaml | 17 +-
proto/tendermint/abci/types.proto | 257 +-
proto/tendermint/blocksync/types.pb.go | 3 +-
proto/tendermint/blocksync/types.proto | 6 +-
proto/tendermint/consensus/types.pb.go | 9 +-
proto/tendermint/consensus/types.proto | 27 +-
proto/tendermint/crypto/crypto.go | 8 +
proto/tendermint/p2p/pex.go | 8 -
proto/tendermint/p2p/pex.pb.go | 826 +-
proto/tendermint/p2p/pex.proto | 23 +-
proto/tendermint/p2p/types.proto | 8 +-
proto/tendermint/privval/service.proto | 2 +-
proto/tendermint/rpc/grpc/types.proto | 32 -
proto/tendermint/state/types.pb.go | 259 +-
proto/tendermint/state/types.proto | 4 +-
proto/tendermint/statesync/message_test.go | 44 +-
proto/tendermint/statesync/types.proto | 9 +-
proto/tendermint/types/canonical.pb.go | 345 +-
proto/tendermint/types/canonical.proto | 9 +
proto/tendermint/types/evidence.pb.go | 3 +-
proto/tendermint/types/evidence.proto | 7 +-
proto/tendermint/types/params.pb.go | 1143 +-
proto/tendermint/types/params.proto | 64 +-
proto/tendermint/types/types.pb.go | 310 +-
proto/tendermint/types/types.proto | 76 +-
proto/tendermint/version/types.pb.go | 6 +-
proto/tendermint/version/types.proto | 6 +-
rpc/client/event_test.go | 199 +-
rpc/client/eventstream/eventstream.go | 193 +
rpc/client/eventstream/eventstream_test.go | 274 +
rpc/client/evidence_test.go | 104 +-
rpc/client/examples_test.go | 131 +-
rpc/client/helpers.go | 138 +-
rpc/client/helpers_test.go | 48 +-
rpc/client/http/http.go | 314 +-
rpc/client/http/ws.go | 111 +-
rpc/client/interface.go | 63 +-
rpc/client/local/local.go | 244 +-
rpc/client/main_test.go | 25 +-
rpc/client/mock/abci.go | 69 +-
rpc/client/mock/abci_test.go | 167 +-
rpc/client/mock/client.go | 66 +-
rpc/client/mock/status_test.go | 55 +-
rpc/client/mocks/abci_client.go | 171 +
rpc/client/mocks/client.go | 87 +-
rpc/client/mocks/events_client.go | 50 +
rpc/client/mocks/evidence_client.go | 52 +
rpc/client/mocks/history_client.go | 96 +
rpc/client/mocks/mempool_client.go | 112 +
rpc/client/mocks/network_client.go | 142 +
rpc/client/mocks/remote_client.go | 87 +-
rpc/client/mocks/sign_client.go | 260 +
rpc/client/mocks/status_client.go | 50 +
rpc/client/mocks/subscription_client.go | 85 +
rpc/client/rpc_test.go | 1427 +-
rpc/coretypes/requests.go | 190 +
rpc/coretypes/responses.go | 255 +-
rpc/coretypes/responses_test.go | 1 +
rpc/grpc/api.go | 41 -
rpc/grpc/client_server.go | 44 -
rpc/grpc/grpc_test.go | 46 -
rpc/grpc/types.pb.go | 924 -
rpc/jsonrpc/client/args_test.go | 39 -
rpc/jsonrpc/client/decode.go | 112 +-
rpc/jsonrpc/client/encode.go | 46 -
rpc/jsonrpc/client/http_json_client.go | 99 +-
rpc/jsonrpc/client/http_json_client_test.go | 10 +-
rpc/jsonrpc/client/http_uri_client.go | 85 -
rpc/jsonrpc/client/integration_test.go | 42 +-
rpc/jsonrpc/client/ws_client.go | 181 +-
rpc/jsonrpc/client/ws_client_test.go | 163 +-
rpc/jsonrpc/doc.go | 2 +-
rpc/jsonrpc/jsonrpc_test.go | 330 +-
rpc/jsonrpc/server/http_json_handler.go | 260 +-
rpc/jsonrpc/server/http_json_handler_test.go | 86 +-
rpc/jsonrpc/server/http_server.go | 309 +-
rpc/jsonrpc/server/http_server_test.go | 106 +-
rpc/jsonrpc/server/http_uri_handler.go | 269 +-
rpc/jsonrpc/server/parse_test.go | 227 +-
rpc/jsonrpc/server/rpc_func.go | 271 +-
rpc/jsonrpc/server/ws_handler.go | 192 +-
rpc/jsonrpc/server/ws_handler_test.go | 37 +-
rpc/jsonrpc/test/main.go | 31 +-
rpc/jsonrpc/types/types.go | 404 +-
rpc/jsonrpc/types/types_test.go | 74 +-
rpc/openapi/openapi.yaml | 226 +-
rpc/test/helpers.go | 74 +-
scripts/authors.sh | 20 +-
scripts/confix/condiff/condiff.go | 152 +
scripts/confix/confix.go | 163 +
scripts/confix/confix_test.go | 99 +
scripts/confix/plan.go | 225 +
scripts/confix/testdata/README.md | 52 +
scripts/confix/testdata/baseline.txt | 73 +
.../confix/testdata/diff-26-27.txt | 0
scripts/confix/testdata/diff-27-28.txt | 3 +
.../confix/testdata/diff-28-29.txt | 0
.../confix/testdata/diff-29-30.txt | 0
scripts/confix/testdata/diff-30-31.txt | 7 +
scripts/confix/testdata/diff-31-32.txt | 5 +
scripts/confix/testdata/diff-32-33.txt | 6 +
scripts/confix/testdata/diff-33-34.txt | 20 +
scripts/confix/testdata/diff-34-35.txt | 31 +
scripts/confix/testdata/diff-35-36.txt | 27 +
scripts/confix/testdata/non-config.toml | 6 +
scripts/confix/testdata/v26-config.toml | 249 +
scripts/confix/testdata/v27-config.toml | 249 +
scripts/confix/testdata/v28-config.toml | 252 +
scripts/confix/testdata/v29-config.toml | 252 +
scripts/confix/testdata/v30-config.toml | 252 +
scripts/confix/testdata/v31-config.toml | 292 +
scripts/confix/testdata/v32-config.toml | 319 +
scripts/confix/testdata/v33-config.toml | 335 +
scripts/confix/testdata/v34-config.toml | 430 +
scripts/confix/testdata/v35-config.toml | 529 +
scripts/confix/testdata/v36-config.toml | 481 +
scripts/estream/estream.go | 81 +
scripts/json2wal/main.go | 14 +-
scripts/keymigrate/migrate.go | 248 +-
scripts/keymigrate/migrate_test.go | 37 +-
scripts/linkpatch/linkpatch.go | 205 +
scripts/protocgen.sh | 9 -
scripts/scmigrate/migrate.go | 197 +
scripts/scmigrate/migrate_test.go | 174 +
scripts/wal2json/main.go | 10 +-
spec/README.md | 81 +
spec/abci++/README.md | 43 +
.../abci++_app_requirements_002_draft.md | 165 +
.../abci++/abci++_basic_concepts_002_draft.md | 404 +
spec/abci++/abci++_methods_002_draft.md | 909 +
...bci++_tmint_expected_behavior_002_draft.md | 218 +
spec/abci++/v0.md | 156 +
spec/abci++/v1.md | 162 +
spec/abci++/v2.md | 180 +
spec/abci++/v3.md | 201 +
spec/abci++/v4.md | 199 +
spec/abci/README.md | 27 +
spec/abci/abci.md | 757 +
spec/abci/apps.md | 685 +
spec/abci/client-server.md | 113 +
spec/consensus/bft-time.md | 55 +
spec/consensus/consensus-paper/IEEEtran.bst | 2417 ++
spec/consensus/consensus-paper/IEEEtran.cls | 4733 ++++
spec/consensus/consensus-paper/README.md | 24 +
.../consensus-paper/algorithmicplus.sty | 195 +
spec/consensus/consensus-paper/conclusion.tex | 16 +
spec/consensus/consensus-paper/consensus.tex | 397 +
.../consensus/consensus-paper/definitions.tex | 126 +
spec/consensus/consensus-paper/homodel.sty | 32 +
spec/consensus/consensus-paper/intro.tex | 138 +
spec/consensus/consensus-paper/latex8.bst | 1124 +
spec/consensus/consensus-paper/latex8.sty | 168 +
spec/consensus/consensus-paper/lit.bib | 1659 ++
spec/consensus/consensus-paper/paper.tex | 153 +
spec/consensus/consensus-paper/proof.tex | 280 +
spec/consensus/consensus-paper/rounddiag.sty | 62 +
spec/consensus/consensus-paper/technote.sty | 118 +
spec/consensus/consensus.md | 352 +
spec/consensus/creating-proposal.md | 43 +
spec/consensus/evidence.md | 199 +
spec/consensus/light-client/README.md | 9 +
spec/consensus/light-client/accountability.md | 3 +
.../light-client/assets/light-node-image.png | Bin 0 -> 122270 bytes
spec/consensus/light-client/detection.md | 3 +
spec/consensus/light-client/verification.md | 3 +
.../proposer-based-timestamp/README.md | 157 +
.../pbts-algorithm_002_draft.md | 148 +
.../pbts-sysmodel_002_draft.md | 357 +
.../proposer-based-timestamp/tla/Apalache.tla | 109 +
.../proposer-based-timestamp/tla/MC_PBT.tla | 77 +
.../tla/TendermintPBT_001_draft.tla | 597 +
.../tla/TendermintPBT_002_draft.tla | 885 +
.../proposer-based-timestamp/tla/typedefs.tla | 39 +
.../v1/pbts-algorithm_001_draft.md | 162 +
.../v1/pbts-sysmodel_001_draft.md | 194 +
.../v1/pbts_001_draft.md | 267 +
spec/consensus/proposer-selection.md | 323 +
spec/consensus/readme.md | 32 +
spec/consensus/signing.md | 229 +
spec/consensus/wal.md | 32 +
spec/core/data_structures.md | 478 +
spec/core/encoding.md | 300 +
spec/core/genesis.md | 35 +
spec/core/readme.md | 13 +
spec/core/state.md | 121 +
spec/ivy-proofs/Dockerfile | 37 +
spec/ivy-proofs/README.md | 33 +
spec/ivy-proofs/abstract_tendermint.ivy | 178 +
spec/ivy-proofs/accountable_safety_1.ivy | 143 +
spec/ivy-proofs/accountable_safety_2.ivy | 52 +
spec/ivy-proofs/check_proofs.sh | 39 +
spec/ivy-proofs/classic_safety.ivy | 85 +
spec/ivy-proofs/count_lines.sh | 13 +
spec/ivy-proofs/docker-compose.yml | 7 +
spec/ivy-proofs/domain_model.ivy | 143 +
spec/ivy-proofs/network_shim.ivy | 133 +
spec/ivy-proofs/output/.gitignore | 4 +
spec/ivy-proofs/tendermint.ivy | 420 +
spec/ivy-proofs/tendermint_test.ivy | 127 +
spec/light-client/README.md | 206 +
.../accountability/001indinv-apalache.csv | 13 +
spec/light-client/accountability/MC_n4_f1.tla | 46 +
spec/light-client/accountability/MC_n4_f2.tla | 46 +
.../accountability/MC_n4_f2_amnesia.tla | 62 +
spec/light-client/accountability/MC_n4_f3.tla | 46 +
spec/light-client/accountability/MC_n5_f1.tla | 46 +
spec/light-client/accountability/MC_n5_f2.tla | 46 +
spec/light-client/accountability/MC_n6_f1.tla | 46 +
spec/light-client/accountability/README.md | 308 +
spec/light-client/accountability/Synopsis.md | 105 +
.../TendermintAccDebug_004_draft.tla | 101 +
.../TendermintAccInv_004_draft.tla | 376 +
.../TendermintAccTrace_004_draft.tla | 37 +
.../TendermintAcc_004_draft.tla | 596 +
.../results/001indinv-apalache-mem-log.svg | 1063 +
.../results/001indinv-apalache-mem.svg | 1141 +
.../results/001indinv-apalache-ncells.svg | 1015 +
.../results/001indinv-apalache-nclauses.svg | 1133 +
.../results/001indinv-apalache-report.md | 61 +
.../results/001indinv-apalache-time-log.svg | 1134 +
.../results/001indinv-apalache-time.svg | 957 +
.../results/001indinv-apalache-unstable.csv | 13 +
spec/light-client/accountability/run.sh | 9 +
spec/light-client/accountability/typedefs.tla | 36 +
spec/light-client/assets/light-node-image.png | Bin 0 -> 122270 bytes
.../attacks/Blockchain_003_draft.tla | 166 +
.../attacks/Isolation_001_draft.tla | 159 +
.../attacks/LCVerificationApi_003_draft.tla | 192 +
spec/light-client/attacks/MC_5_3.tla | 18 +
.../attacks/isolate-attackers_001_draft.md | 222 +
.../attacks/isolate-attackers_002_reviewed.md | 225 +
.../attacks/notes-on-evidence-handling.md | 219 +
.../detection/004bmc-apalache-ok.csv | 10 +
.../detection/005bmc-apalache-error.csv | 4 +
.../detection/Blockchain_003_draft.tla | 164 +
.../detection/LCD_MC3_3_faulty.tla | 27 +
.../detection/LCD_MC3_4_faulty.tla | 27 +
.../detection/LCD_MC4_4_faulty.tla | 27 +
.../detection/LCD_MC5_5_faulty.tla | 27 +
.../detection/LCDetector_003_draft.tla | 373 +
.../detection/LCVerificationApi_003_draft.tla | 192 +
spec/light-client/detection/README.md | 75 +
.../detection/detection_001_reviewed.md | 790 +
.../detection/detection_003_reviewed.md | 841 +
spec/light-client/detection/discussions.md | 178 +
.../light-client/detection/draft-functions.md | 289 +
.../detection/req-ibc-detection.md | 347 +
spec/light-client/experiments.png | Bin 0 -> 83681 bytes
.../supervisor/supervisor_001_draft.md | 639 +
.../supervisor/supervisor_001_draft.tla | 71 +
.../supervisor/supervisor_002_draft.md | 131 +
.../verification/001bmc-apalache.csv | 49 +
.../verification/002bmc-apalache-ok.csv | 55 +
.../verification/003bmc-apalache-error.csv | 45 +
.../verification/004bmc-apalache-ok.csv | 10 +
.../verification/005bmc-apalache-error.csv | 4 +
.../verification/Blockchain_002_draft.tla | 171 +
.../verification/Blockchain_003_draft.tla | 164 +
.../verification/Blockchain_A_1.tla | 171 +
.../LCVerificationApi_003_draft.tla | 192 +
.../verification/Lightclient_002_draft.tla | 465 +
.../verification/Lightclient_003_draft.tla | 493 +
.../verification/Lightclient_A_1.tla | 440 +
.../verification/MC4_3_correct.tla | 26 +
.../verification/MC4_3_faulty.tla | 26 +
.../verification/MC4_4_correct.tla | 26 +
.../verification/MC4_4_correct_drifted.tla | 26 +
.../verification/MC4_4_faulty.tla | 26 +
.../verification/MC4_4_faulty_drifted.tla | 26 +
.../verification/MC4_5_correct.tla | 26 +
.../verification/MC4_5_faulty.tla | 26 +
.../verification/MC4_6_faulty.tla | 26 +
.../verification/MC4_7_faulty.tla | 26 +
.../verification/MC5_5_correct.tla | 26 +
.../MC5_5_correct_peer_two_thirds_faulty.tla | 26 +
.../verification/MC5_5_faulty.tla | 26 +
.../MC5_5_faulty_peer_two_thirds_faulty.tla | 26 +
.../verification/MC5_7_faulty.tla | 26 +
.../verification/MC7_5_faulty.tla | 26 +
.../verification/MC7_7_faulty.tla | 26 +
spec/light-client/verification/README.md | 577 +
.../verification_001_published.md | 1180 +
.../verification/verification_002_draft.md | 1063 +
.../verification/verification_003_draft.md | 76 +
spec/p2p/config.md | 49 +
spec/p2p/connection.md | 111 +
spec/p2p/messages/README.md | 19 +
spec/p2p/messages/block-sync.md | 68 +
spec/p2p/messages/consensus.md | 149 +
spec/p2p/messages/evidence.md | 23 +
spec/p2p/messages/mempool.md | 33 +
spec/p2p/messages/pex.md | 47 +
spec/p2p/messages/state-sync.md | 134 +
spec/p2p/node.md | 65 +
spec/p2p/peer.md | 130 +
spec/p2p/readme.md | 6 +
spec/rpc/README.md | 1382 ++
test/Makefile | 2 +
test/app/grpc_client.go | 42 -
test/app/kvstore_test.sh | 4 +-
test/app/test.sh | 28 +-
test/docker/Dockerfile | 2 +-
test/docker/config-template.toml | 3 +
test/e2e/README.md | 2 +-
test/e2e/app/app.go | 323 +-
test/e2e/app/snapshots.go | 11 +-
test/e2e/app/state.go | 12 +-
test/e2e/generator/generate.go | 66 +-
test/e2e/generator/generate_test.go | 55 +-
test/e2e/generator/main.go | 55 +-
test/e2e/networks/ci.toml | 12 +-
test/e2e/node/main.go | 160 +-
test/e2e/pkg/manifest.go | 7 -
test/e2e/pkg/mockcoreserver/server_test.go | 8 +-
test/e2e/pkg/testnet.go | 11 +-
test/e2e/runner/benchmark.go | 8 +-
test/e2e/runner/cleanup.go | 11 +-
test/e2e/runner/evidence.go | 68 +-
test/e2e/runner/load.go | 3 +-
test/e2e/runner/main.go | 74 +-
test/e2e/runner/perturb.go | 9 +-
test/e2e/runner/rpc.go | 14 +-
test/e2e/runner/setup.go | 45 +-
test/e2e/runner/start.go | 18 +-
test/e2e/runner/test.go | 2 -
test/e2e/runner/wait.go | 7 +-
test/e2e/tests/app_test.go | 54 +-
test/e2e/tests/block_test.go | 10 +-
test/e2e/tests/e2e_test.go | 10 +-
test/e2e/tests/evidence_test.go | 10 +-
test/e2e/tests/net_test.go | 3 +-
test/e2e/tests/validator_test.go | 31 +-
test/fuzz/Makefile | 52 -
test/fuzz/README.md | 62 +-
test/fuzz/mempool/v0/checktx.go | 37 -
test/fuzz/mempool/v0/fuzz_test.go | 33 -
test/fuzz/mempool/v0/testdata/cases/empty | 0
test/fuzz/mempool/v1/checktx.go | 37 -
test/fuzz/mempool/v1/fuzz_test.go | 33 -
test/fuzz/mempool/v1/testdata/cases/empty | 0
test/fuzz/oss-fuzz-build.sh | 26 +-
test/fuzz/p2p/addrbook/fuzz.go | 35 -
test/fuzz/p2p/addrbook/fuzz_test.go | 33 -
test/fuzz/p2p/addrbook/init-corpus/main.go | 59 -
test/fuzz/p2p/addrbook/testdata/cases/empty | 0
test/fuzz/p2p/pex/fuzz_test.go | 33 -
test/fuzz/p2p/pex/init-corpus/main.go | 84 -
test/fuzz/p2p/pex/reactor_receive.go | 95 -
test/fuzz/p2p/pex/testdata/addrbook1 | 1705 --
test/fuzz/p2p/secretconnection/fuzz_test.go | 33 -
.../p2p/secretconnection/init-corpus/main.go | 48 -
test/fuzz/rpc/jsonrpc/server/fuzz_test.go | 33 -
test/fuzz/rpc/jsonrpc/server/handler.go | 63 -
.../1184f5b8d4b6dd08709cf1513f26744167065e0d | 1 -
.../1184f5b8d4b6dd08709cf1513f26744167065e0d | 1 -
.../bbcffb1cdb2cea50fd3dd8c1524905551d0b2e79 | 1 -
...d-fuzz_rpc_jsonrpc_server-4738572803506176 | 1 -
...d-fuzz_rpc_jsonrpc_server-4738572803506176 | 1 -
test/fuzz/tests/mempool_test.go | 33 +
.../p2p_secretconnection_test.go} | 20 +-
test/fuzz/tests/rpc_jsonrpc_server_test.go | 72 +
...cb7440674e67a9e2cc0a4531863076254ada059863 | 2 +
...9a43e0f9fd5c94bba343ce7bb6724d4ebafe311ed4 | 2 +
...a91bcef18e6f24cf368bb4bd248c7a7101ef8e178d | 2 +
...9bad652d355431f5824327271aca6f648e8cd4e786 | 2 +
...9b235928fc1c8c4adbb4635913c204c4724cf47d20 | 2 +
...c8907cb66557347cb9b45709b17da861997d7cabea | 2 +
...b97caa73657b4a78d48e5fd6fc3b1590d24799e803 | 2 +
...c18a7ec4eb3c9e1384af92cfa14cf50951535b6c85 | 2 +
...a91bcef18e6f24cf368bb4bd248c7a7101ef8e178d | 2 +
...0b1d027f749960376c338e14a81e0396ffc6e6d6bd | 2 +
...ea46edb8b7cf7368e90da0cb35888a1452f4d114a2 | 2 +
...5b430076844ebd0b3c4f30f5263b94a3d50f00bce6 | 2 +
...e64b33c804d994cce06781e8c39481411793a8a73f | 2 +
...a91bcef18e6f24cf368bb4bd248c7a7101ef8e178d | 2 +
third_party/proto/gogoproto/gogo.proto | 147 -
tools/proto/Dockerfile | 27 -
tools/tm-signer-harness/Dockerfile | 4 -
tools/tm-signer-harness/Makefile | 21 -
tools/tm-signer-harness/README.md | 5 -
.../internal/test_harness.go | 443 -
.../internal/test_harness_test.go | 256 -
tools/tm-signer-harness/internal/utils.go | 25 -
tools/tm-signer-harness/main.go | 203 -
types/block.go | 163 +-
types/block_meta.go | 12 +-
types/block_meta_test.go | 10 +-
types/block_test.go | 386 +-
types/canonical.go | 14 +-
types/canonical_test.go | 4 +-
types/core_chainlock.go | 4 +-
types/errors_p2p.go | 33 -
types/event_bus.go | 330 -
types/event_bus_test.go | 515 -
types/events.go | 162 +-
types/events_test.go | 16 +
types/evidence.go | 183 +-
types/evidence_test.go | 129 +-
types/genesis.go | 135 +-
types/genesis_test.go | 39 +-
types/keys.go | 6 -
types/light.go | 16 +
types/light_test.go | 26 +-
types/netaddress.go | 61 -
types/netaddress_test.go | 83 +-
types/node_id.go | 3 +-
types/node_info.go | 102 +-
types/node_info_test.go | 127 +-
types/node_key.go | 51 +-
types/node_key_test.go | 4 +-
types/params.go | 241 +-
types/params_test.go | 344 +-
types/part_set.go | 8 +-
types/part_set_test.go | 6 +-
types/priv_validator.go | 35 +-
types/proposal.go | 84 +-
types/proposal_test.go | 168 +-
types/protobuf.go | 8 +-
types/protobuf_test.go | 11 +-
types/results.go | 54 -
types/results_test.go | 54 -
types/signable.go | 18 -
types/stateid.go | 12 +-
types/test_util.go | 78 +-
types/tx.go | 226 +-
types/tx_test.go | 187 +-
types/validation.go | 7 +-
types/validation_test.go | 9 +-
types/validator.go | 50 +-
types/validator_address.go | 6 +-
types/validator_set.go | 14 +-
types/validator_set_test.go | 118 +-
types/validator_test.go | 9 +-
types/vote.go | 240 +-
types/vote_set.go | 63 +-
types/vote_set_test.go | 174 +-
types/vote_test.go | 356 +-
version/version.go | 4 +-
1043 files changed, 107885 insertions(+), 66177 deletions(-)
create mode 100644 .github/ISSUE_TEMPLATE/proposal.md
create mode 100644 .github/workflows/e2e-nightly-35x.yml
create mode 100644 .github/workflows/markdown-links.yml
delete mode 100644 .github/workflows/proto-docker.yml
create mode 100644 .github/workflows/proto-lint.yml
delete mode 100644 .github/workflows/proto.yml
create mode 100644 .markdownlint.yml
create mode 100644 .md-link-check.json
delete mode 100644 DOCKER/.gitignore
delete mode 100644 DOCKER/Dockerfile.build_c-amazonlinux
delete mode 100644 DOCKER/Dockerfile.testing
delete mode 100644 DOCKER/Makefile
delete mode 100755 DOCKER/build.sh
delete mode 100755 DOCKER/push.sh
create mode 100644 RELEASES.md
delete mode 100644 abci/client/creators.go
delete mode 100644 abci/client/socket_client_test.go
delete mode 100644 abci/types/client.go
create mode 100644 abci/types/mocks/application.go
rename abci/types/{result.go => types.go} (59%)
create mode 100644 abci/types/types_test.go
delete mode 100644 abci/version/version.go
create mode 100644 buf.work.yaml
create mode 100644 cmd/tenderdash/commands/completion.go
delete mode 100644 cmd/tenderdash/commands/probe_upnp.go
create mode 100644 cmd/tenderdash/commands/reset.go
delete mode 100644 cmd/tenderdash/commands/reset_priv_validator.go
create mode 100644 cmd/tenderdash/commands/reset_test.go
create mode 100644 crypto/crypto_test.go
delete mode 100644 crypto/example_test.go
delete mode 100644 crypto/hash.go
delete mode 100644 crypto/tmhash/hash.go
delete mode 100644 crypto/tmhash/hash_test.go
delete mode 100644 crypto/version.go
delete mode 100644 crypto/xchacha20poly1305/vector_test.go
delete mode 100644 crypto/xchacha20poly1305/xchachapoly.go
delete mode 100644 crypto/xchacha20poly1305/xchachapoly_test.go
delete mode 100644 crypto/xsalsa20symmetric/symmetric.go
delete mode 100644 crypto/xsalsa20symmetric/symmetric_test.go
rename {dashcore/rpc => dash/core}/client.go (92%)
rename {dashcore/rpc => dash/core}/mock.go (92%)
create mode 100644 docs/architecture/adr-073-libp2p.md
create mode 100644 docs/architecture/adr-074-timeout-params.md
create mode 100644 docs/architecture/adr-075-rpc-subscription.md
create mode 100644 docs/architecture/adr-076-combine-spec-repo.md
create mode 100644 docs/architecture/adr-077-block-retention.md
create mode 100644 docs/architecture/adr-078-nonzero-genesis.md
create mode 100644 docs/architecture/adr-079-ed25519-verification.md
create mode 100644 docs/architecture/adr-080-reverse-sync.md
create mode 100644 docs/architecture/adr-081-protobuf-mgmt.md
delete mode 100644 docs/networks/README.md
create mode 100755 docs/presubmit.sh
create mode 100644 docs/rfc/images/abci++.png
create mode 100644 docs/rfc/images/abci.png
create mode 100644 docs/rfc/rfc-006-event-subscription.md
create mode 100644 docs/rfc/rfc-007-deterministic-proto-bytes.md
create mode 100644 docs/rfc/rfc-008-do-not-panic.md
create mode 100644 docs/rfc/rfc-009-consensus-parameter-upgrades.md
create mode 100644 docs/rfc/rfc-010-p2p-light-client.rst
create mode 100644 docs/rfc/rfc-011-delete-gas.md
create mode 100644 docs/rfc/rfc-012-custom-indexing.md
create mode 100644 docs/rfc/rfc-013-abci++.md
create mode 100644 docs/rfc/rfc-014-semantic-versioning.md
create mode 100644 docs/rfc/rfc-015-abci++-tx-mutation.md
create mode 100644 docs/rfc/rfc-019-config-version.md
create mode 100644 docs/tendermint-core/block-sync/img/block-retention.png
create mode 100644 docs/tendermint-core/consensus/proposer-based-timestamps.md
create mode 100644 docs/tools/debugging/proposer-based-timestamps-runbook.md
rename docs/{networks => tools}/docker-compose.md (100%)
delete mode 100644 docs/tools/remote-signer-validation.md
rename docs/{networks => tools}/terraform-and-ansible.md (99%)
rename internal/blocksync/{v0 => }/pool.go (84%)
rename internal/blocksync/{v0 => }/pool_test.go (85%)
rename internal/blocksync/{v0 => }/reactor.go (56%)
rename internal/blocksync/{v0 => }/reactor_test.go (66%)
delete mode 100644 internal/blocksync/v2/internal/behavior/doc.go
delete mode 100644 internal/blocksync/v2/internal/behavior/peer_behaviour.go
delete mode 100644 internal/blocksync/v2/internal/behavior/reporter.go
delete mode 100644 internal/blocksync/v2/internal/behavior/reporter_test.go
delete mode 100644 internal/blocksync/v2/io.go
delete mode 100644 internal/blocksync/v2/metrics.go
delete mode 100644 internal/blocksync/v2/processor.go
delete mode 100644 internal/blocksync/v2/processor_context.go
delete mode 100644 internal/blocksync/v2/processor_test.go
delete mode 100644 internal/blocksync/v2/reactor.go
delete mode 100644 internal/blocksync/v2/reactor_test.go
delete mode 100644 internal/blocksync/v2/routine.go
delete mode 100644 internal/blocksync/v2/routine_test.go
delete mode 100644 internal/blocksync/v2/scheduler.go
delete mode 100644 internal/blocksync/v2/scheduler_test.go
delete mode 100644 internal/blocksync/v2/types.go
delete mode 100644 internal/consensus/README.md
delete mode 100644 internal/consensus/mocks/cons_sync_reactor.go
create mode 100644 internal/consensus/pbts_test.go
create mode 100644 internal/consensus/peer_state_test.go
create mode 100644 internal/eventbus/event_bus.go
create mode 100644 internal/eventbus/event_bus_test.go
create mode 100644 internal/eventlog/cursor/cursor.go
create mode 100644 internal/eventlog/cursor/cursor_test.go
create mode 100644 internal/eventlog/eventlog.go
create mode 100644 internal/eventlog/eventlog_test.go
create mode 100644 internal/eventlog/item.go
create mode 100644 internal/eventlog/metrics.go
create mode 100644 internal/eventlog/prune.go
create mode 100644 internal/evidence/metrics.go
create mode 100644 internal/jsontypes/jsontypes.go
create mode 100644 internal/jsontypes/jsontypes_test.go
rename {libs => internal/libs}/async/async.go (100%)
rename {libs => internal/libs}/async/async_test.go (100%)
delete mode 100644 internal/libs/fail/fail.go
delete mode 100644 internal/libs/flowrate/README.md
delete mode 100644 internal/libs/flowrate/io.go
delete mode 100644 internal/libs/flowrate/io_test.go
create mode 100644 internal/libs/queue/queue.go
create mode 100644 internal/libs/queue/queue_test.go
delete mode 100644 internal/libs/sync/closer.go
delete mode 100644 internal/libs/sync/closer_test.go
delete mode 100644 internal/libs/sync/deadlock.go
delete mode 100644 internal/libs/sync/sync.go
create mode 100644 internal/mempool/mempool_bench_test.go
rename internal/mempool/{v1 => }/mempool_test.go (61%)
delete mode 100644 internal/mempool/mock/mempool.go
create mode 100644 internal/mempool/mocks/mempool.go
rename internal/mempool/{v1 => }/priority_queue.go (97%)
rename internal/mempool/{v1 => }/priority_queue_test.go (99%)
rename internal/mempool/{v1 => }/reactor.go (56%)
create mode 100644 internal/mempool/reactor_test.go
rename internal/mempool/{v1 => }/tx_test.go (99%)
create mode 100644 internal/mempool/types.go
delete mode 100644 internal/mempool/v0/bench_test.go
delete mode 100644 internal/mempool/v0/cache_test.go
delete mode 100644 internal/mempool/v0/clist_mempool.go
delete mode 100644 internal/mempool/v0/clist_mempool_test.go
delete mode 100644 internal/mempool/v0/doc.go
delete mode 100644 internal/mempool/v0/reactor.go
delete mode 100644 internal/mempool/v0/reactor_test.go
delete mode 100644 internal/mempool/v1/mempool.go
delete mode 100644 internal/mempool/v1/mempool_bench_test.go
delete mode 100644 internal/mempool/v1/reactor_test.go
delete mode 100644 internal/mempool/v1/tx.go
delete mode 100644 internal/p2p/base_reactor.go
create mode 100644 internal/p2p/channel.go
create mode 100644 internal/p2p/channel_test.go
delete mode 100644 internal/p2p/conn/conn_go110.go
delete mode 100644 internal/p2p/conn/conn_notgo110.go
delete mode 100644 internal/p2p/conn_set.go
delete mode 100644 internal/p2p/mock/peer.go
delete mode 100644 internal/p2p/mock/reactor.go
delete mode 100644 internal/p2p/mocks/peer.go
delete mode 100644 internal/p2p/netaddress.go
delete mode 100644 internal/p2p/peer.go
delete mode 100644 internal/p2p/peer_set.go
delete mode 100644 internal/p2p/peer_set_test.go
delete mode 100644 internal/p2p/peer_test.go
delete mode 100644 internal/p2p/pex/addrbook.go
delete mode 100644 internal/p2p/pex/addrbook_test.go
delete mode 100644 internal/p2p/pex/bench_test.go
delete mode 100644 internal/p2p/pex/errors.go
delete mode 100644 internal/p2p/pex/file.go
delete mode 100644 internal/p2p/pex/known_address.go
delete mode 100644 internal/p2p/pex/params.go
delete mode 100644 internal/p2p/pex/pex_reactor.go
delete mode 100644 internal/p2p/pex/pex_reactor_test.go
delete mode 100644 internal/p2p/shim.go
delete mode 100644 internal/p2p/shim_test.go
delete mode 100644 internal/p2p/switch.go
delete mode 100644 internal/p2p/switch_test.go
delete mode 100644 internal/p2p/test_util.go
delete mode 100644 internal/p2p/trust/config.go
delete mode 100644 internal/p2p/trust/metric.go
delete mode 100644 internal/p2p/trust/metric_test.go
delete mode 100644 internal/p2p/trust/store.go
delete mode 100644 internal/p2p/trust/store_test.go
delete mode 100644 internal/p2p/trust/ticker.go
delete mode 100644 internal/p2p/upnp/probe.go
delete mode 100644 internal/p2p/upnp/upnp.go
delete mode 100644 internal/p2p/wdrr_queue.go
delete mode 100644 internal/p2p/wdrr_queue_test.go
delete mode 100644 internal/proxy/app_conn.go
delete mode 100644 internal/proxy/app_conn_test.go
create mode 100644 internal/proxy/client_test.go
delete mode 100644 internal/proxy/multi_app_conn.go
delete mode 100644 internal/proxy/multi_app_conn_test.go
create mode 100644 internal/pubsub/example_test.go
create mode 100644 internal/pubsub/pubsub.go
create mode 100644 internal/pubsub/pubsub_test.go
rename {libs => internal}/pubsub/query/bench_test.go (85%)
rename {libs => internal}/pubsub/query/query.go (94%)
rename {libs => internal}/pubsub/query/query_test.go (94%)
rename {libs => internal}/pubsub/query/syntax/doc.go (100%)
rename {libs => internal}/pubsub/query/syntax/parser.go (100%)
rename {libs => internal}/pubsub/query/syntax/scanner.go (100%)
rename {libs => internal}/pubsub/query/syntax/syntax_test.go (98%)
create mode 100644 internal/pubsub/subindex.go
create mode 100644 internal/pubsub/subscription.go
delete mode 100644 internal/rpc/core/net_test.go
delete mode 100644 internal/state/time.go
delete mode 100644 internal/state/time_test.go
create mode 100644 internal/test/factory/params.go
delete mode 100644 libs/cmap/cmap.go
delete mode 100644 libs/cmap/cmap_test.go
delete mode 100644 libs/events/Makefile
delete mode 100644 libs/events/README.md
delete mode 100644 libs/events/event_cache.go
delete mode 100644 libs/events/event_cache_test.go
delete mode 100644 libs/json/decoder.go
delete mode 100644 libs/json/decoder_test.go
delete mode 100644 libs/json/doc.go
delete mode 100644 libs/json/encoder.go
delete mode 100644 libs/json/encoder_test.go
delete mode 100644 libs/json/helpers_test.go
delete mode 100644 libs/json/structs.go
delete mode 100644 libs/json/types.go
delete mode 100644 libs/pubsub/example_test.go
delete mode 100644 libs/pubsub/pubsub.go
delete mode 100644 libs/pubsub/pubsub_test.go
delete mode 100644 libs/pubsub/subscription.go
delete mode 100644 libs/sync/atomic_bool.go
delete mode 100644 libs/sync/atomic_bool_test.go
create mode 100644 libs/time/mocks/source.go
create mode 100644 node/seed.go
delete mode 100644 privval/rpc_signer_connection.go
create mode 100644 proto/README.md
create mode 100644 proto/buf.lock
rename buf.yaml => proto/buf.yaml (50%)
create mode 100644 proto/tendermint/crypto/crypto.go
delete mode 100644 proto/tendermint/rpc/grpc/types.proto
create mode 100644 rpc/client/eventstream/eventstream.go
create mode 100644 rpc/client/eventstream/eventstream_test.go
create mode 100644 rpc/client/mocks/abci_client.go
create mode 100644 rpc/client/mocks/events_client.go
create mode 100644 rpc/client/mocks/evidence_client.go
create mode 100644 rpc/client/mocks/history_client.go
create mode 100644 rpc/client/mocks/mempool_client.go
create mode 100644 rpc/client/mocks/network_client.go
create mode 100644 rpc/client/mocks/sign_client.go
create mode 100644 rpc/client/mocks/status_client.go
create mode 100644 rpc/client/mocks/subscription_client.go
create mode 100644 rpc/coretypes/requests.go
delete mode 100644 rpc/grpc/api.go
delete mode 100644 rpc/grpc/client_server.go
delete mode 100644 rpc/grpc/grpc_test.go
delete mode 100644 rpc/grpc/types.pb.go
delete mode 100644 rpc/jsonrpc/client/args_test.go
delete mode 100644 rpc/jsonrpc/client/encode.go
delete mode 100644 rpc/jsonrpc/client/http_uri_client.go
create mode 100644 scripts/confix/condiff/condiff.go
create mode 100644 scripts/confix/confix.go
create mode 100644 scripts/confix/confix_test.go
create mode 100644 scripts/confix/plan.go
create mode 100644 scripts/confix/testdata/README.md
create mode 100644 scripts/confix/testdata/baseline.txt
rename test/fuzz/rpc/jsonrpc/server/testdata/cases/empty => scripts/confix/testdata/diff-26-27.txt (100%)
create mode 100644 scripts/confix/testdata/diff-27-28.txt
rename test/fuzz/p2p/secretconnection/testdata/cases/empty => scripts/confix/testdata/diff-28-29.txt (100%)
rename test/fuzz/p2p/pex/testdata/cases/empty => scripts/confix/testdata/diff-29-30.txt (100%)
create mode 100644 scripts/confix/testdata/diff-30-31.txt
create mode 100644 scripts/confix/testdata/diff-31-32.txt
create mode 100644 scripts/confix/testdata/diff-32-33.txt
create mode 100644 scripts/confix/testdata/diff-33-34.txt
create mode 100644 scripts/confix/testdata/diff-34-35.txt
create mode 100644 scripts/confix/testdata/diff-35-36.txt
create mode 100644 scripts/confix/testdata/non-config.toml
create mode 100644 scripts/confix/testdata/v26-config.toml
create mode 100644 scripts/confix/testdata/v27-config.toml
create mode 100644 scripts/confix/testdata/v28-config.toml
create mode 100644 scripts/confix/testdata/v29-config.toml
create mode 100644 scripts/confix/testdata/v30-config.toml
create mode 100644 scripts/confix/testdata/v31-config.toml
create mode 100644 scripts/confix/testdata/v32-config.toml
create mode 100644 scripts/confix/testdata/v33-config.toml
create mode 100644 scripts/confix/testdata/v34-config.toml
create mode 100644 scripts/confix/testdata/v35-config.toml
create mode 100644 scripts/confix/testdata/v36-config.toml
create mode 100644 scripts/estream/estream.go
create mode 100644 scripts/linkpatch/linkpatch.go
delete mode 100755 scripts/protocgen.sh
create mode 100644 scripts/scmigrate/migrate.go
create mode 100644 scripts/scmigrate/migrate_test.go
create mode 100644 spec/README.md
create mode 100644 spec/abci++/README.md
create mode 100644 spec/abci++/abci++_app_requirements_002_draft.md
create mode 100644 spec/abci++/abci++_basic_concepts_002_draft.md
create mode 100644 spec/abci++/abci++_methods_002_draft.md
create mode 100644 spec/abci++/abci++_tmint_expected_behavior_002_draft.md
create mode 100644 spec/abci++/v0.md
create mode 100644 spec/abci++/v1.md
create mode 100644 spec/abci++/v2.md
create mode 100644 spec/abci++/v3.md
create mode 100644 spec/abci++/v4.md
create mode 100644 spec/abci/README.md
create mode 100644 spec/abci/abci.md
create mode 100644 spec/abci/apps.md
create mode 100644 spec/abci/client-server.md
create mode 100644 spec/consensus/bft-time.md
create mode 100644 spec/consensus/consensus-paper/IEEEtran.bst
create mode 100644 spec/consensus/consensus-paper/IEEEtran.cls
create mode 100644 spec/consensus/consensus-paper/README.md
create mode 100644 spec/consensus/consensus-paper/algorithmicplus.sty
create mode 100644 spec/consensus/consensus-paper/conclusion.tex
create mode 100644 spec/consensus/consensus-paper/consensus.tex
create mode 100644 spec/consensus/consensus-paper/definitions.tex
create mode 100644 spec/consensus/consensus-paper/homodel.sty
create mode 100644 spec/consensus/consensus-paper/intro.tex
create mode 100644 spec/consensus/consensus-paper/latex8.bst
create mode 100644 spec/consensus/consensus-paper/latex8.sty
create mode 100644 spec/consensus/consensus-paper/lit.bib
create mode 100644 spec/consensus/consensus-paper/paper.tex
create mode 100644 spec/consensus/consensus-paper/proof.tex
create mode 100644 spec/consensus/consensus-paper/rounddiag.sty
create mode 100644 spec/consensus/consensus-paper/technote.sty
create mode 100644 spec/consensus/consensus.md
create mode 100644 spec/consensus/creating-proposal.md
create mode 100644 spec/consensus/evidence.md
create mode 100644 spec/consensus/light-client/README.md
create mode 100644 spec/consensus/light-client/accountability.md
create mode 100644 spec/consensus/light-client/assets/light-node-image.png
create mode 100644 spec/consensus/light-client/detection.md
create mode 100644 spec/consensus/light-client/verification.md
create mode 100644 spec/consensus/proposer-based-timestamp/README.md
create mode 100644 spec/consensus/proposer-based-timestamp/pbts-algorithm_002_draft.md
create mode 100644 spec/consensus/proposer-based-timestamp/pbts-sysmodel_002_draft.md
create mode 100644 spec/consensus/proposer-based-timestamp/tla/Apalache.tla
create mode 100644 spec/consensus/proposer-based-timestamp/tla/MC_PBT.tla
create mode 100644 spec/consensus/proposer-based-timestamp/tla/TendermintPBT_001_draft.tla
create mode 100644 spec/consensus/proposer-based-timestamp/tla/TendermintPBT_002_draft.tla
create mode 100644 spec/consensus/proposer-based-timestamp/tla/typedefs.tla
create mode 100644 spec/consensus/proposer-based-timestamp/v1/pbts-algorithm_001_draft.md
create mode 100644 spec/consensus/proposer-based-timestamp/v1/pbts-sysmodel_001_draft.md
create mode 100644 spec/consensus/proposer-based-timestamp/v1/pbts_001_draft.md
create mode 100644 spec/consensus/proposer-selection.md
create mode 100644 spec/consensus/readme.md
create mode 100644 spec/consensus/signing.md
create mode 100644 spec/consensus/wal.md
create mode 100644 spec/core/data_structures.md
create mode 100644 spec/core/encoding.md
create mode 100644 spec/core/genesis.md
create mode 100644 spec/core/readme.md
create mode 100644 spec/core/state.md
create mode 100644 spec/ivy-proofs/Dockerfile
create mode 100644 spec/ivy-proofs/README.md
create mode 100644 spec/ivy-proofs/abstract_tendermint.ivy
create mode 100644 spec/ivy-proofs/accountable_safety_1.ivy
create mode 100644 spec/ivy-proofs/accountable_safety_2.ivy
create mode 100755 spec/ivy-proofs/check_proofs.sh
create mode 100644 spec/ivy-proofs/classic_safety.ivy
create mode 100755 spec/ivy-proofs/count_lines.sh
create mode 100644 spec/ivy-proofs/docker-compose.yml
create mode 100644 spec/ivy-proofs/domain_model.ivy
create mode 100644 spec/ivy-proofs/network_shim.ivy
create mode 100644 spec/ivy-proofs/output/.gitignore
create mode 100644 spec/ivy-proofs/tendermint.ivy
create mode 100644 spec/ivy-proofs/tendermint_test.ivy
create mode 100644 spec/light-client/README.md
create mode 100644 spec/light-client/accountability/001indinv-apalache.csv
create mode 100644 spec/light-client/accountability/MC_n4_f1.tla
create mode 100644 spec/light-client/accountability/MC_n4_f2.tla
create mode 100644 spec/light-client/accountability/MC_n4_f2_amnesia.tla
create mode 100644 spec/light-client/accountability/MC_n4_f3.tla
create mode 100644 spec/light-client/accountability/MC_n5_f1.tla
create mode 100644 spec/light-client/accountability/MC_n5_f2.tla
create mode 100644 spec/light-client/accountability/MC_n6_f1.tla
create mode 100644 spec/light-client/accountability/README.md
create mode 100644 spec/light-client/accountability/Synopsis.md
create mode 100644 spec/light-client/accountability/TendermintAccDebug_004_draft.tla
create mode 100644 spec/light-client/accountability/TendermintAccInv_004_draft.tla
create mode 100644 spec/light-client/accountability/TendermintAccTrace_004_draft.tla
create mode 100644 spec/light-client/accountability/TendermintAcc_004_draft.tla
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-mem-log.svg
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-mem.svg
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-ncells.svg
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-nclauses.svg
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-report.md
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-time-log.svg
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-time.svg
create mode 100644 spec/light-client/accountability/results/001indinv-apalache-unstable.csv
create mode 100755 spec/light-client/accountability/run.sh
create mode 100644 spec/light-client/accountability/typedefs.tla
create mode 100644 spec/light-client/assets/light-node-image.png
create mode 100644 spec/light-client/attacks/Blockchain_003_draft.tla
create mode 100644 spec/light-client/attacks/Isolation_001_draft.tla
create mode 100644 spec/light-client/attacks/LCVerificationApi_003_draft.tla
create mode 100644 spec/light-client/attacks/MC_5_3.tla
create mode 100644 spec/light-client/attacks/isolate-attackers_001_draft.md
create mode 100644 spec/light-client/attacks/isolate-attackers_002_reviewed.md
create mode 100644 spec/light-client/attacks/notes-on-evidence-handling.md
create mode 100644 spec/light-client/detection/004bmc-apalache-ok.csv
create mode 100644 spec/light-client/detection/005bmc-apalache-error.csv
create mode 100644 spec/light-client/detection/Blockchain_003_draft.tla
create mode 100644 spec/light-client/detection/LCD_MC3_3_faulty.tla
create mode 100644 spec/light-client/detection/LCD_MC3_4_faulty.tla
create mode 100644 spec/light-client/detection/LCD_MC4_4_faulty.tla
create mode 100644 spec/light-client/detection/LCD_MC5_5_faulty.tla
create mode 100644 spec/light-client/detection/LCDetector_003_draft.tla
create mode 100644 spec/light-client/detection/LCVerificationApi_003_draft.tla
create mode 100644 spec/light-client/detection/README.md
create mode 100644 spec/light-client/detection/detection_001_reviewed.md
create mode 100644 spec/light-client/detection/detection_003_reviewed.md
create mode 100644 spec/light-client/detection/discussions.md
create mode 100644 spec/light-client/detection/draft-functions.md
create mode 100644 spec/light-client/detection/req-ibc-detection.md
create mode 100644 spec/light-client/experiments.png
create mode 100644 spec/light-client/supervisor/supervisor_001_draft.md
create mode 100644 spec/light-client/supervisor/supervisor_001_draft.tla
create mode 100644 spec/light-client/supervisor/supervisor_002_draft.md
create mode 100644 spec/light-client/verification/001bmc-apalache.csv
create mode 100644 spec/light-client/verification/002bmc-apalache-ok.csv
create mode 100644 spec/light-client/verification/003bmc-apalache-error.csv
create mode 100644 spec/light-client/verification/004bmc-apalache-ok.csv
create mode 100644 spec/light-client/verification/005bmc-apalache-error.csv
create mode 100644 spec/light-client/verification/Blockchain_002_draft.tla
create mode 100644 spec/light-client/verification/Blockchain_003_draft.tla
create mode 100644 spec/light-client/verification/Blockchain_A_1.tla
create mode 100644 spec/light-client/verification/LCVerificationApi_003_draft.tla
create mode 100644 spec/light-client/verification/Lightclient_002_draft.tla
create mode 100644 spec/light-client/verification/Lightclient_003_draft.tla
create mode 100644 spec/light-client/verification/Lightclient_A_1.tla
create mode 100644 spec/light-client/verification/MC4_3_correct.tla
create mode 100644 spec/light-client/verification/MC4_3_faulty.tla
create mode 100644 spec/light-client/verification/MC4_4_correct.tla
create mode 100644 spec/light-client/verification/MC4_4_correct_drifted.tla
create mode 100644 spec/light-client/verification/MC4_4_faulty.tla
create mode 100644 spec/light-client/verification/MC4_4_faulty_drifted.tla
create mode 100644 spec/light-client/verification/MC4_5_correct.tla
create mode 100644 spec/light-client/verification/MC4_5_faulty.tla
create mode 100644 spec/light-client/verification/MC4_6_faulty.tla
create mode 100644 spec/light-client/verification/MC4_7_faulty.tla
create mode 100644 spec/light-client/verification/MC5_5_correct.tla
create mode 100644 spec/light-client/verification/MC5_5_correct_peer_two_thirds_faulty.tla
create mode 100644 spec/light-client/verification/MC5_5_faulty.tla
create mode 100644 spec/light-client/verification/MC5_5_faulty_peer_two_thirds_faulty.tla
create mode 100644 spec/light-client/verification/MC5_7_faulty.tla
create mode 100644 spec/light-client/verification/MC7_5_faulty.tla
create mode 100644 spec/light-client/verification/MC7_7_faulty.tla
create mode 100644 spec/light-client/verification/README.md
create mode 100644 spec/light-client/verification/verification_001_published.md
create mode 100644 spec/light-client/verification/verification_002_draft.md
create mode 100644 spec/light-client/verification/verification_003_draft.md
create mode 100644 spec/p2p/config.md
create mode 100644 spec/p2p/connection.md
create mode 100644 spec/p2p/messages/README.md
create mode 100644 spec/p2p/messages/block-sync.md
create mode 100644 spec/p2p/messages/consensus.md
create mode 100644 spec/p2p/messages/evidence.md
create mode 100644 spec/p2p/messages/mempool.md
create mode 100644 spec/p2p/messages/pex.md
create mode 100644 spec/p2p/messages/state-sync.md
create mode 100644 spec/p2p/node.md
create mode 100644 spec/p2p/peer.md
create mode 100644 spec/p2p/readme.md
create mode 100644 spec/rpc/README.md
delete mode 100644 test/app/grpc_client.go
delete mode 100644 test/fuzz/Makefile
delete mode 100644 test/fuzz/mempool/v0/checktx.go
delete mode 100644 test/fuzz/mempool/v0/fuzz_test.go
delete mode 100644 test/fuzz/mempool/v0/testdata/cases/empty
delete mode 100644 test/fuzz/mempool/v1/checktx.go
delete mode 100644 test/fuzz/mempool/v1/fuzz_test.go
delete mode 100644 test/fuzz/mempool/v1/testdata/cases/empty
delete mode 100644 test/fuzz/p2p/addrbook/fuzz.go
delete mode 100644 test/fuzz/p2p/addrbook/fuzz_test.go
delete mode 100644 test/fuzz/p2p/addrbook/init-corpus/main.go
delete mode 100644 test/fuzz/p2p/addrbook/testdata/cases/empty
delete mode 100644 test/fuzz/p2p/pex/fuzz_test.go
delete mode 100644 test/fuzz/p2p/pex/init-corpus/main.go
delete mode 100644 test/fuzz/p2p/pex/reactor_receive.go
delete mode 100644 test/fuzz/p2p/pex/testdata/addrbook1
delete mode 100644 test/fuzz/p2p/secretconnection/fuzz_test.go
delete mode 100644 test/fuzz/p2p/secretconnection/init-corpus/main.go
delete mode 100644 test/fuzz/rpc/jsonrpc/server/fuzz_test.go
delete mode 100644 test/fuzz/rpc/jsonrpc/server/handler.go
delete mode 100644 test/fuzz/rpc/jsonrpc/server/testdata/1184f5b8d4b6dd08709cf1513f26744167065e0d
delete mode 100644 test/fuzz/rpc/jsonrpc/server/testdata/cases/1184f5b8d4b6dd08709cf1513f26744167065e0d
delete mode 100644 test/fuzz/rpc/jsonrpc/server/testdata/cases/bbcffb1cdb2cea50fd3dd8c1524905551d0b2e79
delete mode 100644 test/fuzz/rpc/jsonrpc/server/testdata/cases/clusterfuzz-testcase-minimized-fuzz_rpc_jsonrpc_server-4738572803506176
delete mode 100644 test/fuzz/rpc/jsonrpc/server/testdata/clusterfuzz-testcase-minimized-fuzz_rpc_jsonrpc_server-4738572803506176
create mode 100644 test/fuzz/tests/mempool_test.go
rename test/fuzz/{p2p/secretconnection/read_write.go => tests/p2p_secretconnection_test.go} (92%)
create mode 100644 test/fuzz/tests/rpc_jsonrpc_server_test.go
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzMempool/1daffc1033a0bfc7f0c2bccb7440674e67a9e2cc0a4531863076254ada059863
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzMempool/582528ddfad69eb57775199a43e0f9fd5c94bba343ce7bb6724d4ebafe311ed4
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzMempool/d40a98862ed393eb712e47a91bcef18e6f24cf368bb4bd248c7a7101ef8e178d
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/0f1a3d10e4d642e42a3ccd9bad652d355431f5824327271aca6f648e8cd4e786
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/172c521d1c5e7a5cce55e39b235928fc1c8c4adbb4635913c204c4724cf47d20
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/a9481542b8154bfe8fe868c8907cb66557347cb9b45709b17da861997d7cabea
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/ba3758980fe724f83bdf1cb97caa73657b4a78d48e5fd6fc3b1590d24799e803
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/c22ff3cdf5145a03ecc6a2c18a7ec4eb3c9e1384af92cfa14cf50951535b6c85
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/d40a98862ed393eb712e47a91bcef18e6f24cf368bb4bd248c7a7101ef8e178d
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzP2PSecretConnection/dc7304b2cddeadd08647d30b1d027f749960376c338e14a81e0396ffc6e6d6bd
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzRPCJSONRPCServer/058ae08103537df220789dea46edb8b7cf7368e90da0cb35888a1452f4d114a2
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzRPCJSONRPCServer/2ab633cb322fca9e76fc965b430076844ebd0b3c4f30f5263b94a3d50f00bce6
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzRPCJSONRPCServer/aadb440fa55da05c1185e3e64b33c804d994cce06781e8c39481411793a8a73f
create mode 100644 test/fuzz/tests/testdata/fuzz/FuzzRPCJSONRPCServer/d40a98862ed393eb712e47a91bcef18e6f24cf368bb4bd248c7a7101ef8e178d
delete mode 100644 third_party/proto/gogoproto/gogo.proto
delete mode 100644 tools/proto/Dockerfile
delete mode 100644 tools/tm-signer-harness/Dockerfile
delete mode 100644 tools/tm-signer-harness/Makefile
delete mode 100644 tools/tm-signer-harness/README.md
delete mode 100644 tools/tm-signer-harness/internal/test_harness.go
delete mode 100644 tools/tm-signer-harness/internal/test_harness_test.go
delete mode 100644 tools/tm-signer-harness/internal/utils.go
delete mode 100644 tools/tm-signer-harness/main.go
delete mode 100644 types/errors_p2p.go
delete mode 100644 types/event_bus.go
delete mode 100644 types/event_bus_test.go
delete mode 100644 types/keys.go
delete mode 100644 types/results.go
delete mode 100644 types/results_test.go
diff --git a/.github/ISSUE_TEMPLATE/proposal.md b/.github/ISSUE_TEMPLATE/proposal.md
new file mode 100644
index 0000000000..45f0bff42f
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/proposal.md
@@ -0,0 +1,37 @@
+---
+name: Protocol Change Proposal
+about: Create a proposal to request a change to the protocol
+
+---
+
+
+
+# Protocol Change Proposal
+
+## Summary
+
+
+
+## Problem Definition
+
+
+
+## Proposal
+
+
+
+____
+
+#### For Admin Use
+
+- [ ] Not duplicate issue
+- [ ] Appropriate labels applied
+- [ ] Appropriate contributors tagged
+- [ ] Contributor assigned/self-assigned
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
index 9960106ffd..4bd765afd2 100644
--- a/.github/dependabot.yml
+++ b/.github/dependabot.yml
@@ -3,8 +3,7 @@ updates:
- package-ecosystem: github-actions
directory: "/"
schedule:
- interval: daily
- time: "11:00"
+ interval: weekly
open-pull-requests-limit: 10
# - package-ecosystem: npm
# directory: "/docs"
@@ -18,7 +17,7 @@ updates:
directory: "/"
schedule:
interval: daily
- time: "11:00"
+ target-branch: "v0.35.x"
open-pull-requests-limit: 10
reviewers:
- shotonoff
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index de908193c6..ebc0f58ee2 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -1,4 +1,3 @@
----
name: Build
# Tests runs different tests (test_abci_apps, test_abci_cli, test_apps)
# This workflow runs on every push to master or release branch and every pull requests
@@ -43,7 +42,7 @@ jobs:
arch: ${{ matrix.goarch }}
- name: install-gcc
run: sudo apt-get update -qq && sudo apt-get install -qq --yes gcc-10-arm-linux-gnueabi g++-10-arm-linux-gnueabi
- if: "matrix.goarch == 'arm'"
+ if: "matrix.goarch == 'arm'"
- name: install
run: |
GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} make build-binary
@@ -55,11 +54,11 @@ jobs:
needs: build
timeout-minutes: 5
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: "1.17"
- - uses: actions/checkout@v2.4.0
- - uses: technote-space/get-diff-action@v5
+ - uses: actions/checkout@v3
+ - uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -80,11 +79,11 @@ jobs:
needs: build
timeout-minutes: 5
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: "1.17"
- - uses: actions/checkout@v2.4.0
- - uses: technote-space/get-diff-action@v5
+ - uses: actions/checkout@v3
+ - uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml
index c8ff2e1b35..65f07c43ab 100644
--- a/.github/workflows/docker.yml
+++ b/.github/workflows/docker.yml
@@ -19,7 +19,7 @@ jobs:
platforms: all
- name: Set up Docker Build
- uses: docker/setup-buildx-action@v1.6.0
+ uses: docker/setup-buildx-action@v1.7.0
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
@@ -50,7 +50,7 @@ jobs:
suffix=${{ steps.suffix.outputs.result }}
- name: Publish to Docker Hub
- uses: docker/build-push-action@v2.9.0
+ uses: docker/build-push-action@v2.10.0
with:
context: .
file: ./DOCKER/Dockerfile
diff --git a/.github/workflows/e2e-manual.yml b/.github/workflows/e2e-manual.yml
index acff89af10..bab3fcf62d 100644
--- a/.github/workflows/e2e-manual.yml
+++ b/.github/workflows/e2e-manual.yml
@@ -1,4 +1,5 @@
-# Manually run randomly generated E2E testnets (as nightly).
+# Runs randomly generated E2E testnets nightly on master
+# Manually run E2E tests.
name: e2e-manual
on:
workflow_dispatch:
@@ -10,16 +11,15 @@ jobs:
strategy:
fail-fast: false
matrix:
- p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '1.17'
- - uses: actions/checkout@v2.4.0
+ - uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
@@ -29,8 +29,8 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
- run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
+ run: ./build/generator -g 4 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
- run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
+ run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
diff --git a/.github/workflows/e2e-nightly-34x.yml b/.github/workflows/e2e-nightly-34x.yml
index 0160718359..ec92cb112b 100644
--- a/.github/workflows/e2e-nightly-34x.yml
+++ b/.github/workflows/e2e-nightly-34x.yml
@@ -6,7 +6,7 @@
name: e2e-nightly-34x
on:
- workflow_dispatch: # allow running workflow manually, in theory
+ workflow_dispatch: # allow running workflow manually, in theory
schedule:
- cron: '0 2 * * *'
@@ -21,11 +21,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '1.17'
- - uses: actions/checkout@v2.3.4
+ - uses: actions/checkout@v3
with:
ref: 'v0.34.x'
@@ -59,7 +59,7 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
SLACK_FOOTER: ''
- e2e-nightly-success: # may turn this off once they seem to pass consistently
+ e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
diff --git a/.github/workflows/e2e-nightly-35x.yml b/.github/workflows/e2e-nightly-35x.yml
new file mode 100644
index 0000000000..c397ead9c0
--- /dev/null
+++ b/.github/workflows/e2e-nightly-35x.yml
@@ -0,0 +1,75 @@
+# Runs randomly generated E2E testnets nightly on v0.35.x.
+
+# !! If you change something in this file, you probably want
+# to update the e2e-nightly-master workflow as well!
+
+name: e2e-nightly-35x
+on:
+ schedule:
+ - cron: '0 2 * * *'
+
+jobs:
+ e2e-nightly-test:
+ # Run parallel jobs for the listed testnet groups (must match the
+ # ./build/generator -g flag)
+ strategy:
+ fail-fast: false
+ matrix:
+ p2p: ['legacy', 'new', 'hybrid']
+ group: ['00', '01', '02', '03']
+ runs-on: ubuntu-latest
+ timeout-minutes: 60
+ steps:
+ - uses: actions/setup-go@v3
+ with:
+ go-version: '1.17'
+
+ - uses: actions/checkout@v3
+ with:
+ ref: 'v0.35.x'
+
+ - name: Build
+ working-directory: test/e2e
+ # Run make jobs in parallel, since we can't run steps in parallel.
+ run: make -j2 docker generator runner tests
+
+ - name: Generate testnets
+ working-directory: test/e2e
+ # When changing -g, also change the matrix groups above
+ run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
+
+ - name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
+ working-directory: test/e2e
+ run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
+
+ e2e-nightly-fail-2:
+ needs: e2e-nightly-test
+ if: ${{ failure() }}
+ runs-on: ubuntu-latest
+ steps:
+ - name: Notify Slack on failure
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
+ SLACK_CHANNEL: tendermint-internal
+ SLACK_USERNAME: Nightly E2E Tests
+ SLACK_ICON_EMOJI: ':skull:'
+ SLACK_COLOR: danger
+ SLACK_MESSAGE: Nightly E2E tests failed on v0.35.x
+ SLACK_FOOTER: ''
+
+ e2e-nightly-success: # may turn this off once they seem to pass consistently
+ needs: e2e-nightly-test
+ if: ${{ success() }}
+ runs-on: ubuntu-latest
+ steps:
+ - name: Notify Slack on success
+ uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
+ env:
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
+ SLACK_CHANNEL: tendermint-internal
+ SLACK_USERNAME: Nightly E2E Tests
+ SLACK_ICON_EMOJI: ':white_check_mark:'
+ SLACK_COLOR: good
+ SLACK_MESSAGE: Nightly E2E tests passed on v0.35.x
+ SLACK_FOOTER: ''
diff --git a/.github/workflows/e2e-nightly-master.yml b/.github/workflows/e2e-nightly-master.yml
index 30479bf8de..58ffa81c17 100644
--- a/.github/workflows/e2e-nightly-master.yml
+++ b/.github/workflows/e2e-nightly-master.yml
@@ -5,27 +5,26 @@
name: e2e-nightly-master
on:
- workflow_dispatch: # allow running workflow manually
+ workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
jobs:
- e2e-nightly-test-2:
+ e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
- p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '1.17'
- - uses: actions/checkout@v2.3.4
+ - uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
@@ -35,14 +34,14 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
- run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
+ run: ./build/generator -g 4 -d networks/nightly/
- - name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
+ - name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
- run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
+ run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
- needs: e2e-nightly-test-2
+ needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
diff --git a/.github/workflows/e2e.yml b/.github/workflows/e2e.yml
index a44473e647..de13a03302 100644
--- a/.github/workflows/e2e.yml
+++ b/.github/workflows/e2e.yml
@@ -20,13 +20,13 @@ jobs:
env:
FULLNODE_PUBKEY_KEEP: false
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '1.17'
- - uses: actions/checkout@v2.3.4
+ - uses: actions/checkout@v3
with:
submodules: true
- - uses: technote-space/get-diff-action@v5
+ - uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
diff --git a/.github/workflows/fuzz-nightly.yml b/.github/workflows/fuzz-nightly.yml
index 57a0962084..0fcab9ae5b 100644
--- a/.github/workflows/fuzz-nightly.yml
+++ b/.github/workflows/fuzz-nightly.yml
@@ -13,34 +13,19 @@ jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '1.17'
- - uses: actions/checkout@v2.3.4
+ - uses: actions/checkout@v3
- name: Install go-fuzz
working-directory: test/fuzz
- run: go get -u github.com/dvyukov/go-fuzz/go-fuzz github.com/dvyukov/go-fuzz/go-fuzz-build
+ run: go install github.com/dvyukov/go-fuzz/go-fuzz@latest github.com/dvyukov/go-fuzz/go-fuzz-build@latest
- - name: Fuzz mempool-v1
+ - name: Fuzz mempool
working-directory: test/fuzz
- run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool-v1
- continue-on-error: true
-
- - name: Fuzz mempool-v0
- working-directory: test/fuzz
- run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool-v0
- continue-on-error: true
-
- - name: Fuzz p2p-addrbook
- working-directory: test/fuzz
- run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-addrbook
- continue-on-error: true
-
- - name: Fuzz p2p-pex
- working-directory: test/fuzz
- run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-pex
+ run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool
continue-on-error: true
- name: Fuzz p2p-sc
@@ -54,14 +39,14 @@ jobs:
continue-on-error: true
- name: Archive crashers
- uses: actions/upload-artifact@v2
+ uses: actions/upload-artifact@v3
with:
name: crashers
path: test/fuzz/**/crashers
retention-days: 3
- name: Archive suppressions
- uses: actions/upload-artifact@v2
+ uses: actions/upload-artifact@v3
with:
name: suppressions
path: test/fuzz/**/suppressions
diff --git a/.github/workflows/jepsen.yml b/.github/workflows/jepsen.yml
index 0e358af6e4..04e599564a 100644
--- a/.github/workflows/jepsen.yml
+++ b/.github/workflows/jepsen.yml
@@ -46,7 +46,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the Jepsen repository
- uses: actions/checkout@v2.3.4
+ uses: actions/checkout@v3
with:
repository: 'tendermint/jepsen'
@@ -58,7 +58,7 @@ jobs:
run: docker exec -i jepsen-control bash -c 'source /root/.bashrc; cd /jepsen/tendermint; lein run test --nemesis ${{ github.event.inputs.nemesis }} --workload ${{ github.event.inputs.workload }} --concurrency ${{ github.event.inputs.concurrency }} --tendermint-url ${{ github.event.inputs.tendermintUrl }} --merkleeyes-url ${{ github.event.inputs.merkleeyesUrl }} --time-limit ${{ github.event.inputs.timeLimit }} ${{ github.event.inputs.dupOrSuperByzValidators }}'
- name: Archive results
- uses: actions/upload-artifact@v2
+ uses: actions/upload-artifact@v3
with:
name: results
path: tendermint/store/latest
diff --git a/.github/workflows/linkchecker.yml b/.github/workflows/linkchecker.yml
index 6633c2c441..e2ba808617 100644
--- a/.github/workflows/linkchecker.yml
+++ b/.github/workflows/linkchecker.yml
@@ -6,7 +6,7 @@ jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- - uses: actions/checkout@v2.3.4
- - uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
+ - uses: actions/checkout@v3
+ - uses: creachadair/github-action-markdown-link-check@master
with:
folder-path: "docs"
diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 3f99b9f808..cfe8dde29b 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -1,7 +1,11 @@
-name: Lint
-# Lint runs golangci-lint over the entire Tendermint repository
-# This workflow is run on every pull request and push to master
-# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
+name: Golang Linter
+# Lint runs golangci-lint over the entire Tendermint repository.
+#
+# This workflow is run on every pull request and push to master.
+#
+# The `golangci` job will pass without running if no *.{go, mod, sum}
+# files have been modified.
+
on:
pull_request:
push:
@@ -13,13 +17,13 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 8
steps:
- - uses: actions/checkout@v2.4.0
+ - uses: actions/checkout@v3
with:
submodules: true
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '^1.17'
- - uses: technote-space/get-diff-action@v5
+ - uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -32,7 +36,9 @@ jobs:
- uses: golangci/golangci-lint-action@v3.1.0
with:
- # Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
+ # Required: the version of golangci-lint is required and
+ # must be specified without patch version: we always use the
+ # latest patch version.
version: v1.45
args: --timeout 10m
github-token: ${{ secrets.github_token }}
diff --git a/.github/workflows/linter.yml b/.github/workflows/linter.yml
index 628b1af69e..badae8c1f8 100644
--- a/.github/workflows/linter.yml
+++ b/.github/workflows/linter.yml
@@ -1,4 +1,4 @@
-name: Lint
+name: Markdown Linter
on:
push:
branches:
@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
- uses: actions/checkout@v2.4.0
+ uses: actions/checkout@v3
- name: Lint Code Base
uses: docker://github/super-linter:v4
env:
diff --git a/.github/workflows/markdown-links.yml b/.github/workflows/markdown-links.yml
new file mode 100644
index 0000000000..7af7e3ce90
--- /dev/null
+++ b/.github/workflows/markdown-links.yml
@@ -0,0 +1,23 @@
+name: Check Markdown links
+
+on:
+ push:
+ branches:
+ - master
+ pull_request:
+ branches: [master]
+
+jobs:
+ markdown-link-check:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v3
+ - uses: technote-space/get-diff-action@v6
+ with:
+ PATTERNS: |
+ **/**.md
+ - uses: creachadair/github-action-markdown-link-check@master
+ with:
+ check-modified-files-only: 'yes'
+ config-file: '.md-link-check.json'
+ if: env.GIT_DIFF
diff --git a/.github/workflows/proto-docker.yml b/.github/workflows/proto-docker.yml
deleted file mode 100644
index 340a1b78b2..0000000000
--- a/.github/workflows/proto-docker.yml
+++ /dev/null
@@ -1,51 +0,0 @@
-name: Build & Push TM Proto Builder
-on:
- pull_request:
- paths:
- - "tools/proto/*"
- push:
- branches:
- - master
- paths:
- - "tools/proto/*"
- schedule:
- # run this job once a month to recieve any go or buf updates
- - cron: "* * 1 * *"
-
-jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@v2.3.4
- - name: Prepare
- id: prep
- run: |
- DOCKER_IMAGE=tendermintdev/docker-build-proto
- VERSION=noop
- if [[ $GITHUB_REF == refs/tags/* ]]; then
- VERSION=${GITHUB_REF#refs/tags/}
- elif [[ $GITHUB_REF == refs/heads/* ]]; then
- VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
- if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
- VERSION=latest
- fi
- fi
- TAGS="${DOCKER_IMAGE}:${VERSION}"
- echo ::set-output name=tags::${TAGS}
-
- - name: Set up Docker Buildx
- uses: docker/setup-buildx-action@v1.6.0
-
- - name: Login to DockerHub
- uses: docker/login-action@v1.14.1
- with:
- username: ${{ secrets.DOCKERHUB_USERNAME }}
- password: ${{ secrets.DOCKERHUB_TOKEN }}
-
- - name: Publish to Docker Hub
- uses: docker/build-push-action@v2.9.0
- with:
- context: ./tools/proto
- file: ./tools/proto/Dockerfile
- push: ${{ github.event_name != 'pull_request' }}
- tags: ${{ steps.prep.outputs.tags }}
diff --git a/.github/workflows/proto-lint.yml b/.github/workflows/proto-lint.yml
new file mode 100644
index 0000000000..b1fbeab9df
--- /dev/null
+++ b/.github/workflows/proto-lint.yml
@@ -0,0 +1,21 @@
+name: Protobuf Lint
+on:
+ pull_request:
+ paths:
+ - 'proto/**'
+ push:
+ branches:
+ - master
+ paths:
+ - 'proto/**'
+
+jobs:
+ lint:
+ runs-on: ubuntu-latest
+ timeout-minutes: 5
+ steps:
+ - uses: actions/checkout@v3
+ - uses: bufbuild/buf-setup-action@v1.4.0
+ - uses: bufbuild/buf-lint-action@v1
+ with:
+ input: 'proto'
diff --git a/.github/workflows/proto.yml b/.github/workflows/proto.yml
deleted file mode 100644
index 2eeb3dfd55..0000000000
--- a/.github/workflows/proto.yml
+++ /dev/null
@@ -1,23 +0,0 @@
-name: Protobuf
-# Protobuf runs buf (https://buf.build/) lint and check-breakage
-# This workflow is only run when a .proto file has been modified
-on:
- workflow_dispatch: # allow running workflow manually
- pull_request:
- paths:
- - "**.proto"
-jobs:
- proto-lint:
- runs-on: ubuntu-latest
- timeout-minutes: 4
- steps:
- - uses: actions/checkout@v2.3.4
- - name: lint
- run: make proto-lint
- proto-breakage:
- runs-on: ubuntu-latest
- timeout-minutes: 4
- steps:
- - uses: actions/checkout@v2.3.4
- - name: check-breakage
- run: "make BASE_BRANCH='${{ github.base_ref }}' proto-check-breaking-ci"
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 2939c567b0..fdd466fd57 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -8,11 +8,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
- uses: actions/checkout@v2.3.4
+ uses: actions/checkout@v3
with:
fetch-depth: 0
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: '1.17'
@@ -23,11 +23,13 @@ jobs:
version: latest
args: build --skip-validate # skip validate skips initial sanity checks in order to be able to fully run
+ - run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
+
- name: Release
uses: goreleaser/goreleaser-action@v2
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
- args: release --rm-dist
+ args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
index 1109f09c1c..4089abfbc3 100644
--- a/.github/workflows/stale.yml
+++ b/.github/workflows/stale.yml
@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- - uses: actions/stale@v4
+ - uses: actions/stale@v5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: "This pull request has been automatically marked as stale because it has not had
diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index d389ddbf8e..2fcde2c30a 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -16,11 +16,11 @@ jobs:
matrix:
part: ["00", "01", "02", "03", "04", "05"]
steps:
- - uses: actions/setup-go@v2
+ - uses: actions/setup-go@v3
with:
go-version: "1.17"
- - uses: actions/checkout@v2.3.4
- - uses: technote-space/get-diff-action@v5
+ - uses: actions/checkout@v3
+ - uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -35,7 +35,7 @@ jobs:
run: |
make test-group-${{ matrix.part }} NUM_SPLIT=6
if: env.GIT_DIFF
- - uses: actions/upload-artifact@v2
+ - uses: actions/upload-artifact@v3
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./build/${{ matrix.part }}.profile.out
@@ -44,8 +44,8 @@ jobs:
runs-on: ubuntu-latest
needs: tests
steps:
- - uses: actions/checkout@v2.4.0
- - uses: technote-space/get-diff-action@v5
+ - uses: actions/checkout@v3
+ - uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -53,26 +53,26 @@ jobs:
go.mod
go.sum
Makefile
- - uses: actions/download-artifact@v2
+ - uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-00-coverage"
if: env.GIT_DIFF
- - uses: actions/download-artifact@v2
+ - uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-01-coverage"
if: env.GIT_DIFF
- - uses: actions/download-artifact@v2
+ - uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- - uses: actions/download-artifact@v2
+ - uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: set" >> coverage.txt
if: env.GIT_DIFF
- - uses: codecov/codecov-action@v2.1.0
+ - uses: codecov/codecov-action@v3.1.0
with:
file: ./coverage.txt
if: env.GIT_DIFF
diff --git a/.gitignore b/.gitignore
index 1846a354cc..7cfe62dece 100644
--- a/.gitignore
+++ b/.gitignore
@@ -55,3 +55,11 @@ test/fuzz/**/corpus
test/fuzz/**/crashers
test/fuzz/**/suppressions
test/fuzz/**/*.zip
+proto/spec/**/*.pb.go
+*.aux
+*.bbl
+*.blg
+*.log
+*.pdf
+*.gz
+*.dvi
diff --git a/.markdownlint.yml b/.markdownlint.yml
new file mode 100644
index 0000000000..80e3be4edb
--- /dev/null
+++ b/.markdownlint.yml
@@ -0,0 +1,11 @@
+default: true
+MD001: false
+MD007: {indent: 4}
+MD013: false
+MD024: {siblings_only: true}
+MD025: false
+MD033: false
+MD036: false
+MD010: false
+MD012: false
+MD028: false
diff --git a/.md-link-check.json b/.md-link-check.json
new file mode 100644
index 0000000000..6f47fa2c94
--- /dev/null
+++ b/.md-link-check.json
@@ -0,0 +1,6 @@
+{
+ "retryOn429": true,
+ "retryCount": 5,
+ "fallbackRetryDelay": "30s",
+ "aliveStatusCodes": [200, 206, 503]
+}
diff --git a/CHANGELOG_PENDING.md b/CHANGELOG_PENDING.md
index 7a51fb59ce..442cca2165 100644
--- a/CHANGELOG_PENDING.md
+++ b/CHANGELOG_PENDING.md
@@ -12,16 +12,77 @@ Special thanks to external contributors on this release:
- CLI/RPC/Config
+ - [rpc] \#7121 Remove the deprecated gRPC interface to the RPC service. (@creachadair)
+ - [blocksync] \#7159 Remove support for disabling blocksync in any circumstance. (@tychoish)
+ - [mempool] \#7171 Remove legacy mempool implementation. (@tychoish)
+ - [rpc] \#7575 Rework how RPC responses are written back via HTTP. (@creachadair)
+ - [rpc] \#7713 Remove unused options for websocket clients. (@creachadair)
+ - [config] \#7930 Add new event subscription options and defaults. (@creachadair)
+ - [rpc] \#7982 Add new Events interface and deprecate Subscribe. (@creachadair)
+  - [cli] \#8081 make the reset command safe to use by introducing a `reset-state` command. Fixed by \#8259. (@marbar3778, @cmwaters)
+ - [config] \#8222 default indexer configuration to null. (@creachadair)
+
- Apps
+ - [tendermint/spec] \#7804 Migrate spec from [spec repo](https://github.com/tendermint/spec).
+ - [abci] \#7984 Remove the locks preventing concurrent use of ABCI applications by Tendermint. (@tychoish)
+
- P2P Protocol
+ - [p2p] \#7035 Remove legacy P2P routing implementation and associated configuration options. (@tychoish)
+  - [p2p] \#7265 Peer manager reduces peer score for each failed dial attempt for peers that have not successfully dialed. (@tychoish)
+ - [p2p] [\#7594](https://github.com/tendermint/tendermint/pull/7594) always advertise self, to enable mutual address discovery. (@altergui)
+
- Go API
+ - [rpc] \#7474 Remove the "URI" RPC client. (@creachadair)
+ - [libs/pubsub] \#7451 Internalize the pubsub packages. (@creachadair)
+ - [libs/sync] \#7450 Internalize and remove the library. (@creachadair)
+ - [libs/async] \#7449 Move library to internal. (@creachadair)
+ - [pubsub] \#7231 Remove unbuffered subscriptions and rework the Subscription interface. (@creachadair)
+ - [eventbus] \#7231 Move the EventBus type to the internal/eventbus package. (@creachadair)
+  - [blocksync] \#7046 Remove v2 implementation of the blocksync service and reactor, which was disabled in the previous release. (@tychoish)
+ - [p2p] \#7064 Remove WDRR queue implementation. (@tychoish)
+ - [config] \#7169 `WriteConfigFile` now returns an error. (@tychoish)
+ - [libs/service] \#7288 Remove SetLogger method on `service.Service` interface. (@tychoish)
+ - [abci/client] \#7607 Simplify client interface (removes most "async" methods). (@creachadair)
+ - [libs/json] \#7673 Remove the libs/json (tmjson) library. (@creachadair)
+ - [crypto] \#8412 \#8432 Remove `crypto/tmhash` package in favor of small functions in `crypto` package and cleanup of unused functions. (@tychoish)
+
- Blockchain Protocol
### FEATURES
+- [rpc] [\#7270](https://github.com/tendermint/tendermint/pull/7270) Add `header` and `header_by_hash` RPC Client queries. (@fedekunze)
+- [rpc] [\#7701] Add `ApplicationInfo` to `status` rpc call which contains the application version. (@jonasbostoen)
+- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to roll back to the previous tendermint state in the event of a non-deterministic app hash or a reverted upgrade.
+- [mempool, rpc] \#7041 Add removeTx operation to the RPC layer. (@tychoish)
+- [consensus] \#7354 add a new `synchrony` field to the `ConsensusParameter` struct for controlling the parameters of the proposer-based timestamp algorithm. (@williambanfield)
+- [consensus] \#7376 Update the proposal logic per the Proposer-based timestamps specification so that the proposer will wait for the previous block time to occur before proposing the next block. (@williambanfield)
+- [consensus] \#7391 Use the proposed block timestamp as the proposal timestamp. Update the block validation logic to ensure that the proposed block's timestamp matches the timestamp in the proposal message. (@williambanfield)
+- [consensus] \#7415 Update proposal validation logic to Prevote nil if a proposal does not meet the conditions for Timeliness per the proposer-based timestamp specification. (@anca)
+- [consensus] \#7382 Update block validation to no longer require the block timestamp to be the median of the timestamps of the previous commit. (@anca)
+- [consensus] \#7711 Use the proposer timestamp for the first height instead of the genesis time. Chains will still start consensus at the genesis time. (@anca)
+- [cli] \#8281 Add a tool to update old config files to the latest version. (@creachadair)
+
### IMPROVEMENTS
+- [internal/protoio] \#7325 Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
+- [consensus] \#6969 remove logic to 'unlock' a locked block.
+- [evidence] \#7700 Evidence messages contain single Evidence instead of EvidenceList (@jmalicevic)
+- [evidence] \#7802 Evidence pool emits events when evidence is validated and updates a metric when the number of evidence in the evidence pool changes. (@jmalicevic)
+- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
+- [node] \#7521 Define concrete type for seed node implementation (@spacech1mp)
+- [rpc] \#7612 paginate mempool /unconfirmed_txs rpc endpoint (@spacech1mp)
+- [light] [\#7536](https://github.com/tendermint/tendermint/pull/7536) rpc /status call returns info about the light client (@jmalicevic)
+- [types] \#7765 Replace EvidenceData with EvidenceList to avoid unnecessary nesting of evidence fields within a block. (@jmalicevic)
+
### BUG FIXES
+
+- fix: assignment copies lock value in `BitArray.UnmarshalJSON()` (@lklimek)
+- [light] \#7640 Light Client: fix absence proof verification (@ashcherbakov)
+- [light] \#7641 Light Client: fix querying against the latest height (@ashcherbakov)
+- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang)
+- [cli] \#8276 scmigrate: ensure target key is correctly renamed. (@creachadair)
+- [cli] \#8294 keymigrate: ensure block hash keys are correctly translated. (@creachadair)
+- [cli] \#8352 keymigrate: ensure transaction hash keys are correctly translated. (@creachadair)
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index 8c5a992032..ec1477adcc 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -20,7 +20,7 @@ This code of conduct applies to all projects run by the Tendermint/COSMOS team a
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
-* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don’t tolerate behavior that excludes people in socially marginalized groups.
+* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](https://github.com/stumpsyn/policies/blob/master/citizen_code_of_conduct.md); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don’t tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel admins or the person mentioned above immediately. Whether you’re a regular contributor or a newcomer, we care about making this community a safe place for you and we’ve got your back.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 33b8cf6a78..bfa56bea64 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -105,11 +105,33 @@ specify exactly the dependency you want to update, eg.
## Protobuf
-We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along with [gogoproto](https://github.com/gogo/protobuf) to generate code for use across Tendermint Core.
+We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along
+with [`gogoproto`](https://github.com/gogo/protobuf) to generate code for use
+across Tendermint Core.
-For linting, checking breaking changes and generating proto stubs, we use [buf](https://buf.build/). If you would like to run linting and check if the changes you have made are breaking then you will need to have docker running locally. Then the linting cmd will be `make proto-lint` and the breaking changes check will be `make proto-check-breaking`.
+To generate proto stubs, lint, and check protos for breaking changes, you will
+need to install [buf](https://buf.build/) and `gogoproto`. Then, from the root
+of the repository, run:
-We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running then run `make proto-gen`.
+```bash
+# Lint all of the .proto files in proto/tendermint
+make proto-lint
+
+# Check if any of your local changes (prior to committing to the Git repository)
+# are breaking
+make proto-check-breaking
+
+# Generate Go code from the .proto files in proto/tendermint
+make proto-gen
+```
+
+To automatically format `.proto` files, you will need
+[`clang-format`](https://clang.llvm.org/docs/ClangFormat.html) installed. Once
+installed, you can run:
+
+```bash
+make proto-format
+```
### Visual Studio Code
@@ -227,150 +249,6 @@ Fixes #nnnn
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
-### Release procedure
-
-#### A note about backport branches
-Tendermint's `master` branch is under active development.
-Releases are specified using tags and are built from long-lived "backport" branches.
-Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
-and the backport branches have names like `v0.34.x` or `v0.33.x`
-(literally, `x`; it is not a placeholder in this case).
-
-As non-breaking changes land on `master`, they should also be backported (cherry-picked)
-to these backport branches.
-
-We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
-to the needed branch. There should be a label for any backport branch that you'll be targeting.
-To notify the bot to backport a pull request, mark the pull request with
-the label `S:backport-to-`.
-Once the original pull request is merged, the bot will try to cherry-pick the pull request
-to the backport branch. If the bot fails to backport, it will open a pull request.
-The author of the original pull request is responsible for solving the conflicts and
-merging the pull request.
-
-#### Creating a backport branch
-
-If this is the first release candidate for a major release, you get to have the honor of creating
-the backport branch!
-
-Note that, after creating the backport branch, you'll also need to update the tags on `master`
-so that `go mod` is able to order the branches correctly. You should tag `master` with a "dev" tag
-that is "greater than" the backport branches tags. See #6072 for more context.
-
-In the following example, we'll assume that we're making a backport branch for
-the 0.35.x line.
-
-1. Start on `master`
-2. Create the backport branch:
- `git checkout -b v0.35.x`
-3. Go back to master and tag it as the dev branch for the _next_ major release and push it back up:
- `git tag -a v0.36.0-dev; git push v0.36.0-dev`
-4. Create a new workflow to run the e2e nightlies for this backport branch.
- (See https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-34x.yml
- for an example.)
-
-#### Release candidates
-
-Before creating an official release, especially a major release, we may want to create a
-release candidate (RC) for our friends and partners to test out. We use git tags to
-create RCs, and we build them off of backport branches.
-
-Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
-(for example, `v0.35.0-rc0`).
-
-(Note that branches and tags _cannot_ have the same names, so it's important that these branches
-have distinct names from the tags/release names.)
-
-If this is the first RC for a major release, you'll have to make a new backport branch (see above).
-Otherwise:
-
-1. Start from the backport branch (e.g. `v0.35.x`).
-1. Run the integration tests and the e2e nightlies
- (which can be triggered from the Github UI;
- e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
-1. Prepare the changelog:
- - Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`.
- - Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
- all PRs
- - Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
- or other upgrading flows.
- - Bump TMVersionDefault version in `version.go`
- - Bump P2P and block protocol versions in `version.go`, if necessary
- - Bump ABCI protocol version in `version.go`, if necessary
-1. Open a PR with these changes against the backport branch.
-1. Once these changes have landed on the backport branch, be sure to pull them back down locally.
-2. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
- `git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0`
-3. Push the tag back up to origin:
- `git push origin v0.35.0-rc0`
- Now the tag should be available on the repo's releases page.
-4. Future RCs will continue to be built off of this branch.
-
-Note that this process should only be used for "true" RCs--
-release candidates that, if successful, will be the next release.
-For more experimental "RCs," create a new, short-lived branch and tag that instead.
-
-#### Major release
-
-This major release process assumes that this release was preceded by release candidates.
-If there were no release candidates, begin by creating a backport branch, as described above.
-
-1. Start on the backport branch (e.g. `v0.35.x`)
-2. Run integration tests and the e2e nightlies.
-3. Prepare the release:
- - "Squash" changes from the changelog entries for the RCs into a single entry,
- and add all changes included in `CHANGELOG_PENDING.md`.
- (Squashing includes both combining all entries, as well as removing or simplifying
- any intra-RC changes. It may also help to alphabetize the entries by package name.)
- - Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
- all PRs
- - Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
- or other upgrading flows.
- - Bump TMVersionDefault version in `version.go`
- - Bump P2P and block protocol versions in `version.go`, if necessary
- - Bump ABCI protocol version in `version.go`, if necessary
-4. Open a PR with these changes against the backport branch.
-5. Once these changes are on the backport branch, push a tag with prepared release details.
- This will trigger the actual release `v0.35.0`.
- - `git tag -a v0.35.0 -m 'Release v0.35.0'`
- - `git push origin v0.35.0`
-7. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
-8. Add the release to the documentation site generator config (see
- [DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- - Start on branch `master`.
- - Add a new line at the bottom of [`docs/versions`](./docs/versions) to
- ensure the newest release is the default for the landing page.
- - Add a new entry to `themeConfig.versions` in
- [`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
- release in the dropdown versions menu.
-
-#### Minor release (point releases)
-
-Minor releases are done differently from major releases: They are built off of long-lived backport branches, rather than from master.
-As non-breaking changes land on `master`, they should also be backported (cherry-picked) to these backport branches.
-
-Minor releases don't have release candidates by default, although any tricky changes may merit a release candidate.
-
-To create a minor release:
-
-1. Checkout the long-lived backport branch: `git checkout v0.35.x`
-2. Run integration tests (`make test_integrations`) and the nightlies.
-3. Check out a new branch and prepare the release:
- - Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- - Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- - Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh `
- - Reset the `CHANGELOG_PENDING.md`
- - Bump the ABCI version number, if necessary.
- (Note that ABCI follows semver, and that ABCI versions are the only versions
- which can change during minor releases, and only field additions are valid minor changes.)
-4. Open a PR with these changes that will land them back on `v0.35.x`
-5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- - `git tag -a v0.35.1 -m 'Release v0.35.1'`
- - `git push origin v0.35.1`
-6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- - Remove all `R:minor` labels from the pull requests that were included in the release.
- - Do not merge the backport branch into master.
-
## Testing
### Unit tests
diff --git a/DOCKER/.gitignore b/DOCKER/.gitignore
deleted file mode 100644
index 9059c68485..0000000000
--- a/DOCKER/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-tendermint
diff --git a/DOCKER/Dockerfile.build_c-amazonlinux b/DOCKER/Dockerfile.build_c-amazonlinux
deleted file mode 100644
index 6ec9d539c6..0000000000
--- a/DOCKER/Dockerfile.build_c-amazonlinux
+++ /dev/null
@@ -1,27 +0,0 @@
-FROM amazonlinux:2
-
-RUN yum -y update && \
- yum -y install wget
-
-RUN wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && \
- rpm -ivh epel-release-latest-7.noarch.rpm
-
-RUN yum -y groupinstall "Development Tools"
-RUN yum -y install leveldb-devel which
-
-ENV GOVERSION=1.16.5
-
-RUN cd /tmp && \
- wget https://dl.google.com/go/go${GOVERSION}.linux-amd64.tar.gz && \
- tar -C /usr/local -xf go${GOVERSION}.linux-amd64.tar.gz && \
- mkdir -p /go/src && \
- mkdir -p /go/bin
-
-ENV PATH=$PATH:/usr/local/go/bin:/go/bin
-ENV GOBIN=/go/bin
-ENV GOPATH=/go/src
-
-RUN mkdir -p /tenderdash
-WORKDIR /tenderdash
-
-CMD ["/usr/bin/make", "build", "TENDERMINT_BUILD_OPTIONS=cleveldb"]
diff --git a/DOCKER/Dockerfile.testing b/DOCKER/Dockerfile.testing
deleted file mode 100644
index 7f86ee1800..0000000000
--- a/DOCKER/Dockerfile.testing
+++ /dev/null
@@ -1,16 +0,0 @@
-FROM golang:latest
-
-# Grab deps (jq, hexdump, xxd, killall)
-RUN apt-get update && \
- apt-get install -y --no-install-recommends \
- jq bsdmainutils vim-common psmisc netcat
-
-# Add testing deps for curl
-RUN echo 'deb http://httpredir.debian.org/debian testing main non-free contrib' >> /etc/apt/sources.list && \
- apt-get update && \
- apt-get install -y --no-install-recommends curl
-
-VOLUME /go
-
-EXPOSE 26656
-EXPOSE 26657
diff --git a/DOCKER/Makefile b/DOCKER/Makefile
deleted file mode 100644
index 082e52225e..0000000000
--- a/DOCKER/Makefile
+++ /dev/null
@@ -1,13 +0,0 @@
-build:
- @sh -c "'$(CURDIR)/build.sh'"
-
-push:
- @sh -c "'$(CURDIR)/push.sh'"
-
-build_testing:
- docker build --tag dashpay/tenderdash:testing -f ./Dockerfile.testing ..
-
-build_amazonlinux_buildimage:
- docker build -t "dashpay/tenderdash:build_c-amazonlinux" -f Dockerfile.build_c-amazonlinux ..
-
-.PHONY: build push build_testing build_amazonlinux_buildimage
diff --git a/DOCKER/README.md b/DOCKER/README.md
index b670a06d4e..671b646ad0 100644
--- a/DOCKER/README.md
+++ b/DOCKER/README.md
@@ -8,7 +8,7 @@ Official releases can be found [here](https://github.com/tendermint/tendermint/r
The Dockerfile for tendermint is not expected to change in the near future. The master file used for all builds can be found [here](https://raw.githubusercontent.com/tendermint/tendermint/master/DOCKER/Dockerfile).
-Respective versioned files can be found (replace the Xs with the version number).
+Respective versioned files can be found at `https://raw.githubusercontent.com/tendermint/tendermint/vX.XX.XX/DOCKER/Dockerfile` (replace the Xs with the version number).
## Quick reference
diff --git a/DOCKER/build.sh b/DOCKER/build.sh
deleted file mode 100755
index 193deb3383..0000000000
--- a/DOCKER/build.sh
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# Get the tag from the version, or try to figure it out.
-if [ -z "$TAG" ]; then
- TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
-fi
-if [ -z "$TAG" ]; then
- echo "Please specify a tag."
- exit 1
-fi
-
-TAG_NO_PATCH=${TAG%.*}
-
-read -p "==> Build 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
-echo
-if [[ $REPLY =~ ^[Yy]$ ]]
-then
- docker build -t "dashpay/tenderdash" -t "dashpay/tenderdash:$TAG" -t "dashpay/tenderdash:$TAG_NO_PATCH" ..
-fi
diff --git a/DOCKER/push.sh b/DOCKER/push.sh
deleted file mode 100755
index 5456967a7a..0000000000
--- a/DOCKER/push.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# Get the tag from the version, or try to figure it out.
-if [ -z "$TAG" ]; then
- TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
-fi
-if [ -z "$TAG" ]; then
- echo "Please specify a tag."
- exit 1
-fi
-
-TAG_NO_PATCH=${TAG%.*}
-
-read -p "==> Push 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
-echo
-if [[ $REPLY =~ ^[Yy]$ ]]
-then
- docker push "dashpay/tenderdash:latest"
- docker push "dashpay/tenderdash:$TAG"
- docker push "dashpay/tenderdash:$TAG_NO_PATCH"
-fi
diff --git a/Makefile b/Makefile
index 4c8f774b04..537c8d2491 100644
--- a/Makefile
+++ b/Makefile
@@ -109,34 +109,47 @@ $(BUILDDIR)/:
### Protobuf ###
###############################################################################
-proto-all: proto-gen proto-lint proto-check-breaking
-.PHONY: proto-all
+check-proto-deps:
+ifeq (,$(shell which buf))
+ $(error "buf is required for Protobuf building, linting and breakage checking. See https://docs.buf.build/installation for installation instructions.")
+endif
+ifeq (,$(shell which protoc-gen-gogofaster))
+ $(error "gogofaster plugin for protoc is required. Run 'go install github.com/gogo/protobuf/protoc-gen-gogofaster@latest' to install")
+endif
+.PHONY: check-proto-deps
-proto-gen:
- @echo "Generating Go packages for .proto files"
- @$(DOCKER_PROTO) sh ./scripts/protocgen.sh
+check-proto-format-deps:
+ifeq (,$(shell which clang-format))
+ $(error "clang-format is required for Protobuf formatting. See instructions for your platform on how to install it.")
+endif
+.PHONY: check-proto-format-deps
+
+proto-gen: check-proto-deps
+ @echo "Generating Protobuf files"
+ @buf generate
+ @mv ./proto/tendermint/abci/types.pb.go ./abci/types/
.PHONY: proto-gen
-proto-lint:
- @echo "Running lint checks for .proto files"
- @$(DOCKER_PROTO) buf lint --error-format=json
+# These targets are provided for convenience and are intended for local
+# execution only.
+proto-lint: check-proto-deps
+ @echo "Linting Protobuf files"
+ @buf lint
.PHONY: proto-lint
-proto-format:
- @echo "Formatting .proto files"
- @$(DOCKER_PROTO) find ./ -not -path "./third_party/*" -name '*.proto' -exec clang-format -i {} \;
+proto-format: check-proto-format-deps
+ @echo "Formatting Protobuf files"
+ @find . -name '*.proto' -path "./proto/*" -exec clang-format -i {} \;
.PHONY: proto-format
-proto-check-breaking:
- @echo "Checking for breaking changes in .proto files"
- @$(DOCKER_PROTO) buf breaking --against .git#branch=$(BASE_BRANCH)
+proto-check-breaking: check-proto-deps
+ @echo "Checking for breaking changes in Protobuf files against local branch"
+ @echo "Note: This is only useful if your changes have not yet been committed."
+ @echo " Otherwise read up on buf's \"breaking\" command usage:"
+ @echo " https://docs.buf.build/breaking/usage"
+ @buf breaking --against ".git"
.PHONY: proto-check-breaking
-proto-check-breaking-ci:
- @echo "Checking for breaking changes in .proto files"
- $(DOCKER_PROTO) buf breaking --against $(HTTPS_GIT)#branch=$(BASE_BRANCH)
-.PHONY: proto-check-breaking-ci
-
###############################################################################
### Build ABCI ###
###############################################################################
@@ -192,7 +205,7 @@ go.sum: go.mod
draw_deps:
@# requires brew install graphviz or apt-get install graphviz
- go get github.com/RobotsAndPencils/goviz
+ go install github.com/RobotsAndPencils/goviz@latest
@goviz -i ${REPO_NAME}/cmd/tendermint -d 3 | dot -Tpng -o dependency-graph.png
.PHONY: draw_deps
@@ -359,4 +372,4 @@ $(BUILDDIR)/packages.txt:$(GO_TEST_FILES) $(BUILDDIR)
split-test-packages:$(BUILDDIR)/packages.txt
split -d -n l/$(NUM_SPLIT) $< $<.
test-group-%:split-test-packages
- cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=15m -race -coverprofile=$(BUILDDIR)/$*.profile.out
+ cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=5m -race -coverprofile=$(BUILDDIR)/$*.profile.out
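The `split-test-packages` recipe above relies on GNU `split`'s line-based chunking; a quick local sketch (toy file names, not the real package list) shows what the numeric-suffixed chunks look like:

```sh
# Toy reproduction of the split-test-packages target: -n l/2 splits the
# list into 2 chunks at line boundaries, and -d gives the numeric suffixes
# that the test-group-% target later consumes.
tmp="$(mktemp -d)"
printf 'pkg/a\npkg/b\npkg/c\npkg/d\n' > "$tmp/packages.txt"
split -d -n l/2 "$tmp/packages.txt" "$tmp/packages.txt."
wc -l "$tmp"/packages.txt.0*
```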
diff --git a/README.md b/README.md
index 711f4dc3d2..b3b166f0f4 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
![banner](docs/tendermint-core-image.jpg)
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
-[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
+[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](), for short.
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/dashevo/tenderdash/releases/latest)
@@ -20,10 +20,14 @@ Or [Blockchain](), for shor
Tendermint Core is a Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language - and securely replicates it on many machines.
-For protocol details, see [the specification](https://github.com/tendermint/spec).
+For protocol details, refer to the [Tendermint Specification](./spec/README.md).
For detailed analysis of the consensus protocol, including safety and liveness proofs,
-see our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
+read our paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
+
+## Documentation
+
+Complete documentation can be found on the [website](https://docs.tendermint.com/).
## Releases
@@ -33,11 +37,14 @@ Tendermint has been in the production of private and public environments, most n
See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
-contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/cosmosnetwork).
+contact us [over email](mailto:hello@interchain.io) or [join the chat](https://discord.gg/cosmosnetwork).
+
+More on how releases are conducted can be found [here](./RELEASES.md).
## Security
-To report a security vulnerability, see our [bug bounty program](https://hackerone.com/cosmos).
+To report a security vulnerability, see our [bug bounty
+program](https://hackerone.com/cosmos).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md).
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
@@ -50,22 +57,17 @@ to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe
| Requirement | Notes |
|-------------|------------------|
-| Go version | Go1.16 or higher |
-
-## Documentation
-
-Complete documentation can be found on the [website](https://docs.tendermint.com/master/).
+| Go version | Go1.17 or higher |
### Install
-See the [install instructions](/docs/introduction/install.md).
+See the [install instructions](./docs/introduction/install.md).
### Quick Start
-- [Single node](/docs/introduction/quick-start.md)
-- [Local cluster using docker-compose](/docs/tools/docker-compose.md)
-- [Remote cluster using Terraform and Ansible](/docs/tools/terraform-and-ansible.md)
-- [Join the Cosmos testnet](https://cosmos.network/testnet)
+- [Single node](./docs/introduction/quick-start.md)
+- [Local cluster using docker-compose](./docs/tools/docker-compose.md)
+- [Remote cluster using Terraform and Ansible](./docs/tools/terraform-and-ansible.md)
## Contributing
@@ -73,9 +75,9 @@ Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions.
Before contributing to the project, please take a look at the [contributing guidelines](CONTRIBUTING.md)
and the [style guide](STYLE_GUIDE.md). You may also find it helpful to read the
-[specifications](https://github.com/tendermint/spec), watch the [Developer Sessions](/docs/DEV_SESSIONS.md),
+[specifications](./spec/README.md),
and familiarize yourself with our
-[Architectural Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
+[Architectural Decision Records (ADRs)](./docs/architecture/README.md) and [Request For Comments (RFCs)](./docs/rfc/README.md).
## Versioning
@@ -111,24 +113,23 @@ in [UPGRADING.md](./UPGRADING.md).
## Resources
-### Tendermint Core
-
-For details about the blockchain data structures and the p2p protocols, see the
-[Tendermint specification](https://docs.tendermint.com/master/spec/).
+### Roadmap
-For details on using the software, see the [documentation](/docs/) which is also
-hosted at:
+We keep a public, up-to-date version of our roadmap [here](./docs/roadmap/roadmap.md).
-### Tools
+### Libraries
-Benchmarking is provided by [`tm-load-test`](https://github.com/informalsystems/tm-load-test).
-Additional tooling can be found in [/docs/tools](/docs/tools).
+- [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a framework for building applications in Go
+- [Tendermint in Rust](https://github.com/informalsystems/tendermint-rs)
+- [ABCI Tower](https://github.com/penumbra-zone/tower-abci)
### Applications
-- [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
-- [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
-- [Many more](https://tendermint.com/ecosystem)
+- [Cosmos Hub](https://hub.cosmos.network/)
+- [Terra](https://www.terra.money/)
+- [Celestia](https://celestia.org/)
+- [Anoma](https://anoma.network/)
+- [Vocdoni](https://docs.vocdoni.io/)
### Research
@@ -144,7 +145,7 @@ Tenderdash is maintained by [Dash Core Group](https://www.dash.org/dcg/).
If you'd like to work full-time on Tenderdash, [see our Jobs page](https://www.dash.org/dcg/jobs/).
Tendermint Core is maintained by [Interchain GmbH](https://interchain.berlin).
-If you'd like to work full-time on Tendermint Core, [we're hiring](https://interchain-gmbh.breezy.hr/p/682fb7e8a6f601-software-engineer-tendermint-core)!
+If you'd like to work full-time on Tendermint Core, [we're hiring](https://interchain-gmbh.breezy.hr/)!
Funding for Tendermint Core development comes primarily from the [Interchain Foundation](https://interchain.io),
a Swiss non-profit. The Tendermint trademark is owned by [Tendermint Inc.](https://tendermint.com), the for-profit entity
diff --git a/RELEASES.md b/RELEASES.md
new file mode 100644
index 0000000000..f3bfd20d5c
--- /dev/null
+++ b/RELEASES.md
@@ -0,0 +1,207 @@
+# Releases
+
+Tendermint uses [semantic versioning](https://semver.org/) with each release following
+a `vX.Y.Z` format. The `master` branch is used for active development and thus it's
+advisable not to build against it.
+
+The latest changes are always initially merged into `master`.
+Releases are specified using tags and are built from long-lived "backport" branches
+that are cut from `master` when the release process begins.
+Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
+and the backport branches have names like `v0.34.x` or `v0.33.x`
+(literally, `x`; it is not a placeholder in this case). Tendermint only
+maintains the last two release lines at a time (the older line receives
+predominantly security patches).
+
+## Backporting
+
+As non-breaking changes land on `master`, they should also be backported
+to these backport branches.
+
+We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
+to the needed branch. There should be a label for any backport branch that you'll be targeting.
+To notify the bot to backport a pull request, mark the pull request with the label corresponding
+to the correct backport branch. For example, to backport to v0.35.x, add the label `S:backport-to-v0.35.x`.
+Once the original pull request is merged, the bot will try to cherry-pick the pull request
+to the backport branch. If the bot fails to backport, it will open a pull request.
+The author of the original pull request is responsible for resolving the conflicts and
+merging the pull request.
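Mergify's backport action is driven by rules in `.github/mergify.yml`; a rule for the `v0.35.x` label might look like the following sketch (the rule name and exact conditions are assumptions, not copied from the repository's config):

```yaml
pull_request_rules:
  - name: backport patches to v0.35.x branch
    conditions:
      - base=master
      - label=S:backport-to-v0.35.x
    actions:
      backport:
        branches:
          - v0.35.x
```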
+
+### Creating a backport branch
+
+If this is the first release candidate for a major release, you get to have the
+honor of creating the backport branch!
+
+Note that, after creating the backport branch, you'll also need to update the
+tags on `master` so that `go mod` is able to order the branches correctly. You
+should tag `master` with a "dev" tag that is "greater than" the backport
+branches' tags. See [#6072](https://github.com/tendermint/tendermint/pull/6072)
+for more context.
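You can sanity-check that ordering locally; GNU `sort -V` approximates (but is not identical to) the semantic version ordering that `go mod` uses, and the dev tag should come out last:

```sh
# The dev tag for the next release must sort after every tag on the
# existing backport lines.
printf 'v0.35.0\nv0.36.0-dev\nv0.34.2\n' | sort -V
```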
+
+In the following example, we'll assume that we're making a backport branch for
+the 0.35.x line.
+
+1. Start on `master`
+
+2. Create and push the backport branch:
+ ```sh
+ git checkout -b v0.35.x
+ git push origin v0.35.x
+ ```
+
+3. Create a PR to update the documentation directory for the backport branch.
+
+ We only maintain RFC and ADR documents on master, to avoid confusion.
+ In addition, we rewrite Markdown URLs pointing to master to point to the
+ backport branch, so that generated documentation will link to the correct
+ versions of files elsewhere in the repository. For context on the latter,
+ see https://github.com/tendermint/tendermint/issues/7675.
+
+ To prepare the PR:
+ ```sh
+ # Remove the RFC and ADR documents from the backport.
+ # We only maintain these on master to avoid confusion.
+ git rm -r docs/rfc docs/architecture
+
+ # Update absolute links to point to the backport.
+ go run ./scripts/linkpatch -recur -target v0.35.x -skip-path docs/DOCS_README.md,docs/README.md docs
+
+ # Create and push the PR.
+ git checkout -b update-docs-v035x
+ git commit -m "Update docs for v0.35.x backport branch." docs
+ git push -u origin update-docs-v035x
+ ```
+
+ Be sure to merge this PR before making other changes on the newly-created
+ backport branch.
+
+After doing these steps, go back to `master` and do the following:
+
+1. Tag `master` as the dev branch for the _next_ major release and push it up to GitHub.
+ For example:
+ ```sh
+ git tag -a v0.36.0-dev -m "Development base for Tendermint v0.36."
+ git push origin v0.36.0-dev
+ ```
+
+2. Create a new workflow to run e2e nightlies for the new backport branch.
+ (See [e2e-nightly-master.yml][e2e] for an example.)
+
+3. Add a new section to the Mergify config (`.github/mergify.yml`) to enable the
+ backport bot to work on this branch, and add a corresponding `S:backport-to-v0.35.x`
+ [label](https://github.com/tendermint/tendermint/labels) so the bot can be triggered.
+
+4. Add a new section to the Dependabot config (`.github/dependabot.yml`) to
+ enable automatic update of Go dependencies on this branch. Copy and edit one
+ of the existing branch configurations to set the correct `target-branch`.
+
+[e2e]: https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-master.yml
+
+## Release candidates
+
+Before creating an official release, especially a major release, we may want to create a
+release candidate (RC) for our friends and partners to test out. We use git tags to
+create RCs, and we build them off of backport branches.
+
+Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
+(for example, `v0.35.0-rc0`).
+
+(Note that branches and tags _cannot_ have the same names, so it's important that these branches
+have distinct names from the tags/release names.)
+
+If this is the first RC for a major release, you'll have to make a new backport branch (see above).
+Otherwise:
+
+1. Start from the backport branch (e.g. `v0.35.x`).
+2. Run the integration tests and the e2e nightlies
+ (which can be triggered from the Github UI;
+ e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
+3. Prepare the changelog:
+ - Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`. Each RC should have
+ its own changelog section. These will be squashed when the final candidate is released.
+ - Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
+ all PRs
+ - Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
+ or other upgrading flows.
+ - Bump TMVersionDefault version in `version.go`
+ - Bump P2P and block protocol versions in `version.go`, if necessary.
+ Check the changelog for breaking changes in these components.
+ - Bump ABCI protocol version in `version.go`, if necessary
+4. Open a PR with these changes against the backport branch.
+5. Once these changes have landed on the backport branch, be sure to pull them back down locally.
+6. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
+ `git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0"`
+7. Push the tag back up to origin:
+ `git push origin v0.35.0-rc0`
+ Now the tag should be available on the repo's releases page.
+8. Future RCs will continue to be built off of this branch.
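The `linkify_changelog.py` step in the list above can be approximated in one line of `sed` (the real script may differ; this only sketches the assumed rewrite of escaped `\#NNNN` references into markdown links):

```sh
# Assumed behavior of scripts/linkify_changelog.py: escaped issue
# references like \#8281 become markdown links to the issue.
printf '%s\n' '- [cli] \#8281 Add a config update tool.' \
  | sed -E 's|\\#([0-9]+)|[\\#\1](https://github.com/tendermint/tendermint/issues/\1)|g'
```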
+
+Note that this process should only be used for "true" RCs:
+release candidates that, if successful, will be the next release.
+For more experimental "RCs," create a new, short-lived branch and tag that instead.
+
+## Major release
+
+This major release process assumes that this release was preceded by release candidates.
+If there were no release candidates, begin by creating a backport branch, as described above.
+
+1. Start on the backport branch (e.g. `v0.35.x`)
+2. Run integration tests (`make test_integrations`) and the e2e nightlies.
+3. Prepare the release:
+ - "Squash" changes from the changelog entries for the RCs into a single entry,
+ and add all changes included in `CHANGELOG_PENDING.md`.
+ (Squashing includes both combining all entries, as well as removing or simplifying
+ any intra-RC changes. It may also help to alphabetize the entries by package name.)
+ - Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
+ all PRs
+ - Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
+ or other upgrading flows.
+ - Bump TMVersionDefault version in `version.go`
+ - Bump P2P and block protocol versions in `version.go`, if necessary
+ - Bump ABCI protocol version in `version.go`, if necessary
+4. Open a PR with these changes against the backport branch.
+5. Once these changes are on the backport branch, push a tag with prepared release details.
+ This will trigger the actual release `v0.35.0`.
+ - `git tag -a v0.35.0 -m 'Release v0.35.0'`
+ - `git push origin v0.35.0`
+6. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
+7. Add the release to the documentation site generator config (see
+ [DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
+ - Start on branch `master`.
+ - Add a new line at the bottom of [`docs/versions`](./docs/versions) to
+ ensure the newest release is the default for the landing page.
+ - Add a new entry to `themeConfig.versions` in
+ [`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
+ release in the dropdown versions menu.
+ - Commit these changes to `master` and backport them into the backport
+ branch for this release.
+
+## Minor release (point releases)
+
+Minor releases are done differently from major releases: They are built off of
+long-lived backport branches, rather than from master. As non-breaking changes
+land on `master`, they should also be backported into these backport branches.
+
+Minor releases don't have release candidates by default, although any tricky
+changes may merit a release candidate.
+
+To create a minor release:
+
+1. Check out the long-lived backport branch: `git checkout v0.35.x`
+2. Run integration tests (`make test_integrations`) and the nightlies.
+3. Check out a new branch and prepare the release:
+ - Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
+ - Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
+   - Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To look up an alias from an email, try `bash ./scripts/authors.sh `
+ - Reset the `CHANGELOG_PENDING.md`
+   - Bump the TMVersionDefault in `version.go`
+ - Bump the ABCI version number, if necessary.
+ (Note that ABCI follows semver, and that ABCI versions are the only versions
+ which can change during minor releases, and only field additions are valid minor changes.)
+4. Open a PR with these changes that will land them back on `v0.35.x`
+5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
+ - `git tag -a v0.35.1 -m 'Release v0.35.1'`
+ - `git push origin v0.35.1`
+6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
+ - Remove all `R:minor` labels from the pull requests that were included in the release.
+ - Do not merge the backport branch into master.
diff --git a/SECURITY.md b/SECURITY.md
index 57d13e565a..133e993c41 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -4,7 +4,7 @@
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a [bug
-bounty](https://hackerone.com/tendermint).
+bounty](https://hackerone.com/cosmos).
See the policy for more details on submissions and rewards, and see "Example Vulnerabilities" (below) for examples of the kinds of bugs we're most interested in.
### Guidelines
@@ -86,7 +86,7 @@ If you are running older versions of Tendermint Core, we encourage you to upgrad
## Scope
-The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/tendermint). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
+The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/cosmos). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
* Any third-party services
* Findings from physical testing, such as office access
diff --git a/UPGRADING.md b/UPGRADING.md
index 8972ca6beb..28e44e58c0 100644
--- a/UPGRADING.md
+++ b/UPGRADING.md
@@ -2,6 +2,170 @@
This guide provides instructions for upgrading to specific versions of Tendermint Core.
+## v0.36
+
+### ABCI Changes
+
+#### ABCI++
+
+Coming soon...
+
+#### ABCI Mutex
+
+In previous versions of ABCI, Tendermint was prevented from making
+concurrent calls to ABCI implementations by virtue of mutexes in the
+implementation of Tendermint's ABCI infrastructure. These mutexes have
+been removed from the current implementation and applications will now
+be responsible for managing their own concurrency control.
+
+To replicate the prior semantics, ensure that your ABCI application has a
+single mutex that protects all ABCI method calls from concurrent
+access. You can relax this requirement if your application can
+provide safe concurrent access via other means. Because this safety is
+an application concern, be sure to test the application thoroughly
+using realistic workloads and the race detector to ensure your
+application remains correct.
+
+### Config Changes
+
+- We have added a new, experimental tool to help operators migrate
+ configuration files created by previous versions of Tendermint.
+ To try this tool, run:
+
+ ```shell
+ # Install the tool.
+ go install github.com/tendermint/tendermint/scripts/confix@latest
+
+ # Run the tool with the old configuration file as input.
+ # Replace the -config argument with your path.
+ confix -config ~/.tendermint/config/config.toml -out updated.toml
+ ```
+
+ This tool should be able to update configurations from v0.34 and v0.35. We
+ plan to extend it to handle older configuration files in the future. For now,
+ it will report an error (without making any changes) if it does not recognize
+ the version that created the file.
+
+- The default configuration for a newly-created node now disables indexing for
+ ABCI event metadata. Existing node configurations that already have indexing
+ turned on are not affected. Operators who wish to enable indexing for a new
+ node, however, must now edit the `config.toml` explicitly.
+
+### RPC Changes
+
+Tendermint v0.36 adds a new RPC event subscription API. The existing event
+subscription API based on websockets is now deprecated. It will continue to
+work throughout the v0.36 release, but the `subscribe`, `unsubscribe`, and
+`unsubscribe_all` methods, along with websocket support, will be removed in
+Tendermint v0.37. Callers currently using these features should migrate as
+soon as is practical to the new API.
+
+To enable the new API, node operators set a new `event-log-window-size`
+parameter in the `[rpc]` section of the `config.toml` file. This defines a
+duration of time during which the node will log all events published to the
+event bus for use by RPC consumers.
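
For illustration, such a configuration might look like the following. The 30-second window is an arbitrary example value, not a recommended default:

```toml
[rpc]
# Keep published events available to polling clients for this long.
event-log-window-size = "30s"
```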
+
+Consumers use the new `events` JSON-RPC method to poll for events matching
+their query in the log. Unlike the streaming API, events are not discarded if
+the caller is slow, loses its connection, or crashes. As long as the client
+recovers before its events expire from the log window, it will be able to
+replay and catch up after recovering. Also unlike the streaming API, the client
+can tell if it has truly missed events because they have expired from the log.
+
+The `events` method is a normal JSON-RPC method, and does not require any
+non-standard response processing (in contrast with the old `subscribe`).
+Clients can modify their query at any time, and no longer need to coordinate
+subscribe and unsubscribe calls to handle multiple queries.
+
+The Go client implementations in the Tendermint Core repository have all been
+updated to add a new `Events` method, including the light client proxy.
+
+A new `rpc/client/eventstream` package has also been added to make it easier
+for users to migrate existing uses of the streaming API to the polling API.
+The `eventstream` package handles polling and delivers matching events to a
+callback.
+
+For more detailed information, see [ADR 075](https://tinyurl.com/adr075) which
+defines and describes the new API in detail.
+
+### Timeout Parameter Changes
+
+Tendermint v0.36 updates how the Tendermint consensus timing parameters are
+configured. These parameters, `timeout-propose`, `timeout-propose-delta`,
+`timeout-prevote`, `timeout-prevote-delta`, `timeout-precommit`,
+`timeout-precommit-delta`, `timeout-commit`, and `skip-timeout-commit`, were
+previously configured in `config.toml`. They are no longer read from the
+configuration file and have instead been migrated into the consensus
+parameters (`ConsensusParams`). Nodes that still set these parameters in the
+local configuration file will log a warning on startup indicating that the
+parameters are no longer used.
+
+These parameters have also been pared down. There are no longer separate
+parameters for both the `prevote` and `precommit` phases of Tendermint. The
+separate `timeout-prevote` and `timeout-precommit` parameters have been merged
+into a single `timeout-vote` parameter that configures both of these similar
+phases of the consensus protocol.
+
+A set of reasonable defaults has been put in place for these new parameters
+that will take effect when the node starts up in version v0.36. New chains
+created using v0.36 and beyond will be able to configure these parameters in the
+chain's `genesis.json` file. Chains that upgrade to v0.36 from a previous
+compatible version of Tendermint will begin running with the default values.
+Applications upgrading to v0.36 that wish to use values other than the
+defaults for these parameters may do so by setting the
+`ConsensusParams.Timeout` field of the ABCI `FinalizeBlock` response.
+
+As a safety measure in case of unusual timing issues during the upgrade to
+v0.36, an operator may override the consensus timeout values for a single node.
+Note, however, that these overrides will be removed in Tendermint v0.37. See
+[configuration](https://github.com/tendermint/tendermint/blob/master/docs/nodes/configuration.md)
+for more information about these overrides.
+
+For more discussion of this, see [ADR 074](https://tinyurl.com/adr074), which
+lays out the reasoning for the changes as well as [RFC
+009](https://tinyurl.com/rfc009) for a discussion of the complexities of
+upgrading consensus parameters.
+
+### CLI Changes
+
+The functionality around resetting a node has been extended to make it safer. The
+`unsafe-reset-all` command has been replaced by a `reset` command with four
+subcommands: `blockchain`, `peers`, `unsafe-signer` and `unsafe-all`.
+
+- `tendermint reset blockchain`: Clears a node of all blocks, consensus state, evidence,
+ and indexed transactions. NOTE: This command does not reset application state.
+  If you need to roll back the last application state (to recover from
+  application nondeterminism), see the `tendermint rollback` command instead.
+- `tendermint reset peers`: Clears the peer store, which persists information on peers used
+ by the networking layer. This can be used to get rid of stale addresses or to switch
+ to a predefined set of static peers.
+- `tendermint reset unsafe-signer`: Resets the watermark level of the PrivVal file signer,
+  allowing it to sign votes from the genesis height. This should only be used for testing,
+  as it can lead to the node double-signing.
+- `tendermint reset unsafe-all`: A combination of the other three commands. This deletes
+  the entire `data` directory, which may include application data as well.
+
+### Go API Changes
+
+#### `crypto` Package Cleanup
+
+The `github.com/tendermint/tendermint/crypto/tmhash` package was removed
+to improve clarity. Users are encouraged to use the standard library
+`crypto/sha256` package directly. However, as a convenience, some constants
+and one function have moved to the Tendermint `crypto` package:
+
+- The `crypto.Checksum` function returns the SHA-256 checksum of a
+  byte slice. This is a wrapper around `sha256.Sum256` from the
+  standard library, provided as a function to ease type
+  requirements (the library function returns a `[32]byte` rather than
+  a `[]byte`).
+- `tmhash.TruncatedSize` is now `crypto.AddressSize`, which was
+  previously an alias for the same value.
+- `tmhash.Size` and `tmhash.BlockSize` are now `crypto.HashSize` and
+ `crypto.HashSize`.
+- `tmhash.SumTruncated` is now available via `crypto.AddressHash` or as
+  `crypto.Checksum(<...>)[:crypto.AddressSize]`.
+
## v0.35
### ABCI Changes
@@ -116,11 +280,13 @@ the full RPC interface provided as direct function calls. Import the
the node service as in the following:
```go
- node := node.NewDefault() //construct the node object
- // start and set up the node service
+logger := log.NewNopLogger()
+
+// Construct and start up a node with default settings.
+node := node.NewDefault(logger)
- client := local.New(node.(local.NodeService))
- // use client object to interact with the node
+// Construct a local (in-memory) RPC client to the node.
+client := local.New(logger, node.(local.NodeService))
```
### gRPC Support
@@ -217,7 +383,7 @@ Note also that Tendermint 0.34 also requires Go 1.16 or higher.
were added to support the new State Sync feature.
Previously, syncing a new node to a preexisting network could take days; but with State Sync,
new nodes are able to join a network in a matter of seconds.
- Read [the spec](https://docs.tendermint.com/master/spec/abci/apps.html#state-sync)
+ Read [the spec](https://github.com/tendermint/tendermint/blob/master/spec/abci/apps.md)
if you want to learn more about State Sync, or if you'd like your application to use it.
(If you don't want to support State Sync in your application, you can just implement these new
ABCI methods as no-ops, leaving them empty.)
@@ -342,7 +508,6 @@ The `bech32` package has moved to the Cosmos SDK:
### CLI
The `tendermint lite` command has been renamed to `tendermint light` and has a slightly different API.
-See [the docs](https://docs.tendermint.com/master/tendermint-core/light-client-protocol.html#http-proxy) for details.
### Light Client
@@ -617,7 +782,7 @@ the compilation tag:
Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found at
-)
+ 0 {
msg = args[0]
}
- res, err := client.EchoSync(ctx, msg)
+ res, err := client.Echo(cmd.Context(), msg)
if err != nil {
return err
}
+
printResponse(cmd, args, response{
Data: []byte(res.Message),
})
+
return nil
}
@@ -465,7 +490,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
version = args[0]
}
- res, err := client.InfoSync(ctx, types.RequestInfo{Version: version})
+ res, err := client.Info(cmd.Context(), &types.RequestInfo{Version: version})
if err != nil {
return err
}
@@ -478,28 +503,34 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
const codeBad uint32 = 10
// Append a new tx to application
-func cmdDeliverTx(cmd *cobra.Command, args []string) error {
+func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
printResponse(cmd, args, response{
Code: codeBad,
- Log: "want the tx",
+ Log: "Must provide at least one transaction",
})
return nil
}
- txBytes, err := stringOrHexToBytes(args[0])
- if err != nil {
- return err
+ txs := make([][]byte, len(args))
+ for i, arg := range args {
+ txBytes, err := stringOrHexToBytes(arg)
+ if err != nil {
+ return err
+ }
+ txs[i] = txBytes
}
- res, err := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
+ res, err := client.FinalizeBlock(cmd.Context(), &types.RequestFinalizeBlock{Txs: txs})
if err != nil {
return err
}
- printResponse(cmd, args, response{
- Code: res.Code,
- Data: res.Data,
- Info: res.Info,
- Log: res.Log,
- })
+ for _, tx := range res.TxResults {
+ printResponse(cmd, args, response{
+ Code: tx.Code,
+ Data: tx.Data,
+ Info: tx.Info,
+ Log: tx.Log,
+ })
+ }
return nil
}
@@ -516,7 +547,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
- res, err := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
+ res, err := client.CheckTx(cmd.Context(), &types.RequestCheckTx{Tx: txBytes})
if err != nil {
return err
}
@@ -531,7 +562,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
// Get application Merkle root hash
func cmdCommit(cmd *cobra.Command, args []string) error {
- res, err := client.CommitSync(ctx)
+ res, err := client.Commit(cmd.Context())
if err != nil {
return err
}
@@ -556,7 +587,7 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return err
}
- resQuery, err := client.QuerySync(ctx, types.RequestQuery{
+ resQuery, err := client.Query(cmd.Context(), &types.RequestQuery{
Data: queryBytes,
Path: flagPath,
Height: int64(flagHeight),
@@ -579,38 +610,34 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return nil
}
-func cmdKVStore(cmd *cobra.Command, args []string) error {
- logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
+func makeKVStoreCmd(logger log.Logger) func(*cobra.Command, []string) error {
+ return func(cmd *cobra.Command, args []string) error {
+ // Create the application - in memory or persisted to disk
+ var app types.Application
+ if flagPersist == "" {
+ app = kvstore.NewApplication()
+ } else {
+ app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
+ }
- // Create the application - in memory or persisted to disk
- var app types.Application
- if flagPersist == "" {
- app = kvstore.NewApplication()
- } else {
- app = kvstore.NewPersistentKVStoreApplication(flagPersist)
- app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
- }
+ // Start the listener
+ srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
+ if err != nil {
+ return err
+ }
- // Start the listener
- srv, err := server.NewServer(flagAddress, flagAbci, app)
- if err != nil {
- return err
- }
- srv.SetLogger(logger.With("module", "abci-server"))
- if err := srv.Start(); err != nil {
- return err
- }
+ ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
+ defer cancel()
- // Stop upon receiving SIGTERM or CTRL-C.
- tmos.TrapSignal(logger, func() {
- // Cleanup
- if err := srv.Stop(); err != nil {
- logger.Error("Error while stopping server", "err", err)
+ if err := srv.Start(ctx); err != nil {
+ return err
}
- })
- // Run forever.
- select {}
+ // Run forever.
+ <-ctx.Done()
+ return nil
+ }
+
}
//--------------------------------------------------------------------------------
@@ -618,7 +645,7 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
func printResponse(cmd *cobra.Command, args []string, rsp response) {
if flagVerbose {
- fmt.Println(">", cmd.Use, strings.Join(args, " "))
+ fmt.Println(">", strings.Join(append([]string{cmd.Use}, args...), " "))
}
// Always print the status code.
diff --git a/abci/example/counter/counter.go b/abci/example/counter/counter.go
index e1675eaf8e..4bb6f5b404 100644
--- a/abci/example/counter/counter.go
+++ b/abci/example/counter/counter.go
@@ -1,6 +1,7 @@
package counter
import (
+ "context"
"encoding/binary"
"fmt"
@@ -25,80 +26,82 @@ func NewApplication(serial bool) *Application {
return &Application{serial: serial, CoreChainLockStep: 1}
}
-func (app *Application) Info(req types.RequestInfo) types.ResponseInfo {
- return types.ResponseInfo{Data: fmt.Sprintf("{\"hashes\":%v,\"txs\":%v}", app.hashCount, app.txCount)}
+func (app *Application) Info(_ context.Context, _ *types.RequestInfo) (*types.ResponseInfo, error) {
+ return &types.ResponseInfo{Data: fmt.Sprintf("{\"hashes\":%v,\"txs\":%v}", app.hashCount, app.txCount)}, nil
}
-func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
+func (app *Application) CheckTx(_ context.Context, req *types.RequestCheckTx) (*types.ResponseCheckTx, error) {
if app.serial {
if len(req.Tx) > 8 {
- return types.ResponseDeliverTx{
+ return &types.ResponseCheckTx{
Code: code.CodeTypeEncodingError,
- Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx))}
- }
- tx8 := make([]byte, 8)
- copy(tx8[len(tx8)-len(req.Tx):], req.Tx)
- txValue := binary.BigEndian.Uint64(tx8)
- if txValue != uint64(app.txCount) {
- return types.ResponseDeliverTx{
- Code: code.CodeTypeBadNonce,
- Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)}
- }
- }
- app.txCount++
- return types.ResponseDeliverTx{Code: code.CodeTypeOK}
-}
-
-func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
- if app.serial {
- if len(req.Tx) > 8 {
- return types.ResponseCheckTx{
- Code: code.CodeTypeEncodingError,
- Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx))}
+ Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx)),
+ }, nil
}
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(req.Tx):], req.Tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue < uint64(app.txCount) {
- return types.ResponseCheckTx{
+ return &types.ResponseCheckTx{
Code: code.CodeTypeBadNonce,
- Log: fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue)}
+ Log: fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue),
+ }, nil
}
}
- return types.ResponseCheckTx{Code: code.CodeTypeOK}
+ return &types.ResponseCheckTx{Code: code.CodeTypeOK}, nil
}
-func (app *Application) Commit() (resp types.ResponseCommit) {
+func (app *Application) Commit(_ context.Context) (*types.ResponseCommit, error) {
app.hashCount++
if app.txCount == 0 {
- return types.ResponseCommit{}
+ return &types.ResponseCommit{}, nil
}
hash := make([]byte, 24)
endHash := make([]byte, 8)
binary.BigEndian.PutUint64(endHash, uint64(app.txCount))
hash = append(hash, endHash...)
- return types.ResponseCommit{Data: hash}
+ return &types.ResponseCommit{Data: hash}, nil
}
-func (app *Application) Query(reqQuery types.RequestQuery) types.ResponseQuery {
+func (app *Application) Query(_ context.Context, reqQuery *types.RequestQuery) (*types.ResponseQuery, error) {
switch reqQuery.Path {
case "verify-chainlock":
- return types.ResponseQuery{Code: 0}
+ return &types.ResponseQuery{Code: 0}, nil
case "hash":
- return types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.hashCount))}
+ return &types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.hashCount))}, nil
case "tx":
- return types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.txCount))}
+ return &types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.txCount))}, nil
default:
- return types.ResponseQuery{Log: fmt.Sprintf("Invalid query path. Expected hash or tx, got %v", reqQuery.Path)}
+ return &types.ResponseQuery{Log: fmt.Sprintf("Invalid query path. Expected hash or tx, got %v", reqQuery.Path)}, nil
}
}
-func (app *Application) EndBlock(reqEndBlock types.RequestEndBlock) types.ResponseEndBlock {
- var resp types.ResponseEndBlock
+func (app *Application) FinalizeBlock(_ context.Context, req *types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+ var resp types.ResponseFinalizeBlock
+ for _, tx := range req.Txs {
+ if app.serial {
+ if len(tx) > 8 {
+ resp.TxResults = append(resp.TxResults, &types.ExecTxResult{
+ Code: code.CodeTypeEncodingError,
+ Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(tx)),
+ })
+ }
+ tx8 := make([]byte, 8)
+ copy(tx8[len(tx8)-len(tx):], tx)
+ txValue := binary.BigEndian.Uint64(tx8)
+ if txValue != uint64(app.txCount) {
+ resp.TxResults = append(resp.TxResults, &types.ExecTxResult{
+ Code: code.CodeTypeBadNonce,
+ Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue),
+ })
+ }
+ }
+ app.txCount++
+ }
if app.HasCoreChainLocks {
app.CurrentCoreChainLockHeight = app.CurrentCoreChainLockHeight + uint32(app.CoreChainLockStep)
coreChainLock := tmtypes.NewMockChainLock(app.CurrentCoreChainLockHeight)
resp.NextCoreChainLockUpdate = coreChainLock.ToProto()
}
- return resp
+ return &resp, nil
}
diff --git a/abci/example/example_test.go b/abci/example/example_test.go
index 8b8691e371..066d4071d7 100644
--- a/abci/example/example_test.go
+++ b/abci/example/example_test.go
@@ -6,7 +6,6 @@ import (
"math/rand"
"net"
"os"
- "reflect"
"testing"
"time"
@@ -30,95 +29,69 @@ func init() {
}
func TestKVStore(t *testing.T) {
- fmt.Println("### Testing KVStore")
- testStream(t, kvstore.NewApplication())
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ logger := log.NewNopLogger()
+
+ t.Log("### Testing KVStore")
+ testBulk(ctx, t, logger, kvstore.NewApplication())
}
func TestBaseApp(t *testing.T) {
- fmt.Println("### Testing BaseApp")
- testStream(t, types.NewBaseApplication())
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ logger := log.NewNopLogger()
+
+ t.Log("### Testing BaseApp")
+ testBulk(ctx, t, logger, types.NewBaseApplication())
}
func TestGRPC(t *testing.T) {
- fmt.Println("### Testing GRPC")
- testGRPCSync(t, types.NewGRPCApplication(types.NewBaseApplication()))
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ logger := log.NewNopLogger()
+
+ t.Log("### Testing GRPC")
+ testGRPCSync(ctx, t, logger, types.NewBaseApplication())
}
-func testStream(t *testing.T, app types.Application) {
- const numDeliverTxs = 20000
+func testBulk(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
+ t.Helper()
+
+ const numDeliverTxs = 700000
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
-
// Start the listener
- server := abciserver.NewSocketServer(socket, app)
- server.SetLogger(log.TestingLogger().With("module", "abci-server"))
- err := server.Start()
+ server := abciserver.NewSocketServer(logger.With("module", "abci-server"), socket, app)
+ t.Cleanup(server.Wait)
+ err := server.Start(ctx)
require.NoError(t, err)
- t.Cleanup(func() {
- if err := server.Stop(); err != nil {
- t.Error(err)
- }
- })
// Connect to the socket
- client := abciclient.NewSocketClient(socket, false)
- client.SetLogger(log.TestingLogger().With("module", "abci-client"))
- err = client.Start()
- require.NoError(t, err)
- t.Cleanup(func() {
- if err := client.Stop(); err != nil {
- t.Error(err)
- }
- })
-
- done := make(chan struct{})
- counter := 0
- client.SetResponseCallback(func(req *types.Request, res *types.Response) {
- // Process response
- switch r := res.Value.(type) {
- case *types.Response_DeliverTx:
- counter++
- if r.DeliverTx.Code != code.CodeTypeOK {
- t.Error("DeliverTx failed with ret_code", r.DeliverTx.Code)
- }
- if counter > numDeliverTxs {
- t.Fatalf("Too many DeliverTx responses. Got %d, expected %d", counter, numDeliverTxs)
- }
- if counter == numDeliverTxs {
- go func() {
- time.Sleep(time.Second * 1) // Wait for a bit to allow counter overflow
- close(done)
- }()
- return
- }
- case *types.Response_Flush:
- // ignore
- default:
- t.Error("Unexpected response type", reflect.TypeOf(res.Value))
- }
- })
+ client := abciclient.NewSocketClient(logger.With("module", "abci-client"), socket, false)
+ t.Cleanup(client.Wait)
- ctx := context.Background()
+ err = client.Start(ctx)
+ require.NoError(t, err)
- // Write requests
+ // Construct request
+ rfb := &types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
for counter := 0; counter < numDeliverTxs; counter++ {
- // Send request
- _, err = client.DeliverTxAsync(ctx, types.RequestDeliverTx{Tx: []byte("test")})
- require.NoError(t, err)
-
- // Sometimes send flush messages
- if counter%128 == 0 {
- err = client.FlushSync(context.Background())
- require.NoError(t, err)
- }
+ rfb.Txs[counter] = []byte("test")
+ }
+ // Send bulk request
+ res, err := client.FinalizeBlock(ctx, rfb)
+ require.NoError(t, err)
+ require.Equal(t, numDeliverTxs, len(res.TxResults), "Number of txs doesn't match")
+ for _, tx := range res.TxResults {
+ require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
}
// Send final flush message
- _, err = client.FlushAsync(ctx)
+ err = client.Flush(ctx)
require.NoError(t, err)
-
- <-done
}
//-------------------------
@@ -128,33 +101,25 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
return tmnet.Connect(addr)
}
-func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
- numDeliverTxs := 2000
+func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
+ t.Helper()
+ numDeliverTxs := 680000
socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
// Start the listener
- server := abciserver.NewGRPCServer(socket, app)
- server.SetLogger(log.TestingLogger().With("module", "abci-server"))
- if err := server.Start(); err != nil {
- t.Fatalf("Error starting GRPC server: %v", err.Error())
- }
+ server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, app)
- t.Cleanup(func() {
- if err := server.Stop(); err != nil {
- t.Error(err)
- }
- })
+ require.NoError(t, server.Start(ctx))
+ t.Cleanup(server.Wait)
// Connect to the socket
conn, err := grpc.Dial(socket,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(dialerFunc),
)
- if err != nil {
- t.Fatalf("Error dialing GRPC server: %v", err.Error())
- }
+ require.NoError(t, err, "Error dialing GRPC server")
t.Cleanup(func() {
if err := conn.Close(); err != nil {
@@ -164,26 +129,17 @@ func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
client := types.NewABCIApplicationClient(conn)
- // Write requests
+ // Construct request
+ rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
for counter := 0; counter < numDeliverTxs; counter++ {
- // Send request
- response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{Tx: []byte("test")})
- if err != nil {
- t.Fatalf("Error in GRPC DeliverTx: %v", err.Error())
- }
- counter++
- if response.Code != code.CodeTypeOK {
- t.Error("DeliverTx failed with ret_code", response.Code)
- }
- if counter > numDeliverTxs {
- t.Fatal("Too many DeliverTx responses")
- }
- t.Log("response", counter)
- if counter == numDeliverTxs {
- go func() {
- time.Sleep(time.Second * 1) // Wait for a bit to allow counter overflow
- }()
- }
+ rfb.Txs[counter] = []byte("test")
+ }
+ // Send request
+ response, err := client.FinalizeBlock(ctx, &rfb)
+ require.NoError(t, err, "Error in GRPC FinalizeBlock")
+ require.Equal(t, numDeliverTxs, len(response.TxResults), "Number of txs returned via GRPC doesn't match")
+ for _, tx := range response.TxResults {
+ require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
}
}
diff --git a/abci/example/kvstore/README.md b/abci/example/kvstore/README.md
index fee6e35dca..5eed47050d 100644
--- a/abci/example/kvstore/README.md
+++ b/abci/example/kvstore/README.md
@@ -12,7 +12,7 @@ The app has no replay protection (other than what the mempool provides).
## PersistentKVStoreApplication
The PersistentKVStoreApplication wraps the KVStoreApplication
-and provides two additional features:
+and provides three additional features:
1) persistence of state across app restarts (using Tendermint's ABCI-Handshake mechanism)
2) validator set changes
diff --git a/abci/example/kvstore/helpers.go b/abci/example/kvstore/helpers.go
index b70b541ea7..ae60f5d202 100644
--- a/abci/example/kvstore/helpers.go
+++ b/abci/example/kvstore/helpers.go
@@ -1,6 +1,8 @@
package kvstore
import (
+ "context"
+
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/dash/llmq"
@@ -26,11 +28,12 @@ func RandValidatorSetUpdate(cnt int) types.ValidatorSetUpdate {
// InitKVStore initializes the kvstore app with some data,
// which allows tests to pass and is fine as long as you
// don't make any tx that modifies the validator state
-func InitKVStore(app *PersistentKVStoreApplication) {
+func InitKVStore(ctx context.Context, app *PersistentKVStoreApplication) error {
val := RandValidatorSetUpdate(1)
- app.InitChain(types.RequestInitChain{
+ _, err := app.InitChain(ctx, &types.RequestInitChain{
ValidatorSet: &val,
})
+ return err
}
func randNodeAddrs(n int) []string {
diff --git a/abci/example/kvstore/kvstore.go b/abci/example/kvstore/kvstore.go
index e2f7f34d28..c1ea46108c 100644
--- a/abci/example/kvstore/kvstore.go
+++ b/abci/example/kvstore/kvstore.go
@@ -2,17 +2,27 @@ package kvstore
import (
"bytes"
+ "context"
+ "encoding/base64"
"encoding/binary"
"encoding/json"
"fmt"
+ "strings"
+ "sync"
+ "github.com/gogo/protobuf/proto"
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/types"
+ "github.com/tendermint/tendermint/crypto"
+ "github.com/tendermint/tendermint/internal/libs/protoio"
+ "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/version"
)
+const ValidatorSetUpdatePrefix string = "vsu:"
+
var (
stateKey = []byte("stateKey")
kvPairPrefixKey = []byte("kvPairKey:")
@@ -65,35 +75,72 @@ var _ types.Application = (*Application)(nil)
type Application struct {
types.BaseApplication
-
+ mu sync.Mutex
state State
RetainBlocks int64 // blocks to retain after commit (via ResponseCommit.RetainHeight)
+ logger log.Logger
+
+ // validator set update
+ valUpdatesRepo *repository
+ valSetUpdate types.ValidatorSetUpdate
+ valsIndex map[string]*types.ValidatorUpdate
}
func NewApplication() *Application {
- state := loadState(dbm.NewMemDB())
- return &Application{state: state}
+ db := dbm.NewMemDB()
+ return &Application{
+ logger: log.NewNopLogger(),
+ state: loadState(db),
+ valsIndex: make(map[string]*types.ValidatorUpdate),
+ valUpdatesRepo: &repository{db},
+ }
+}
+
+func (app *Application) InitChain(_ context.Context, req *types.RequestInitChain) (*types.ResponseInitChain, error) {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+ err := app.setValSetUpdate(req.ValidatorSet)
+ if err != nil {
+ return nil, err
+ }
+ return &types.ResponseInitChain{}, nil
}
-func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
- return types.ResponseInfo{
+func (app *Application) Info(_ context.Context, req *types.RequestInfo) (*types.ResponseInfo, error) {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+ return &types.ResponseInfo{
Data: fmt.Sprintf("{\"size\":%v}", app.state.Size),
Version: version.ABCIVersion,
AppVersion: ProtocolVersion,
LastBlockHeight: app.state.Height,
LastBlockAppHash: app.state.AppHash,
- }
+ }, nil
}
-// tx is either "key=value" or just arbitrary bytes
-func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
- var key, value string
+// tx is either a "vsu:"-prefixed validator-set update, a "prepare"-prefixed placeholder, "key=value", or just arbitrary bytes
+func (app *Application) handleTx(tx []byte) *types.ExecTxResult {
+ if isValidatorSetUpdateTx(tx) {
+ err := app.execValidatorSetTx(tx)
+ if err != nil {
+ return &types.ExecTxResult{
+ Code: code.CodeTypeUnknownError,
+ Log: err.Error(),
+ }
+ }
+ return &types.ExecTxResult{Code: code.CodeTypeOK}
+ }
+
+ if isPrepareTx(tx) {
+ return app.execPrepareTx(tx)
+ }
- parts := bytes.Split(req.Tx, []byte("="))
+ var key, value string
+ parts := bytes.Split(tx, []byte("="))
if len(parts) == 2 {
key, value = string(parts[0]), string(parts[1])
} else {
- key, value = string(req.Tx), string(req.Tx)
+ key, value = string(tx), string(tx)
}
err := app.state.db.Set(prefixKey([]byte(key)), []byte(value))
@@ -114,14 +161,56 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
},
}
- return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
+ return &types.ExecTxResult{Code: code.CodeTypeOK, Events: events}
}
-func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
- return types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
+func (app *Application) Close() error {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+
+ return app.state.db.Close()
}
-func (app *Application) Commit() types.ResponseCommit {
+func (app *Application) FinalizeBlock(_ context.Context, req *types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+
+ // reset valset changes
+ app.valSetUpdate = types.ValidatorSetUpdate{}
+ app.valSetUpdate.ValidatorUpdates = make([]types.ValidatorUpdate, 0)
+
+ // Punish validators who committed equivocation.
+ for _, ev := range req.ByzantineValidators {
+ // TODO: this code may no longer be needed here
+ if ev.Type == types.MisbehaviorType_DUPLICATE_VOTE {
+ proTxHash := crypto.ProTxHash(ev.Validator.ProTxHash)
+ v, ok := app.valsIndex[proTxHash.String()]
+ if !ok {
+ return nil, fmt.Errorf("wanted to punish val %q but can't find it", proTxHash.ShortString())
+ }
+ v.Power = ev.Validator.Power - 1
+ }
+ }
+
+ respTxs := make([]*types.ExecTxResult, len(req.Txs))
+ for i, tx := range req.Txs {
+ respTxs[i] = app.handleTx(tx)
+ }
+
+ return &types.ResponseFinalizeBlock{
+ TxResults: respTxs,
+ ValidatorSetUpdate: proto.Clone(&app.valSetUpdate).(*types.ValidatorSetUpdate),
+ }, nil
+}
+
+func (*Application) CheckTx(_ context.Context, req *types.RequestCheckTx) (*types.ResponseCheckTx, error) {
+ return &types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}, nil
+}
+
+func (app *Application) Commit(_ context.Context) (*types.ResponseCommit, error) {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+
// Using a memdb - just return the big endian size of the db
appHash := make([]byte, 32)
binary.PutVarint(appHash, app.state.Size)
@@ -129,52 +218,276 @@ func (app *Application) Commit() types.ResponseCommit {
app.state.Height++
saveState(app.state)
- resp := types.ResponseCommit{Data: appHash}
+ resp := &types.ResponseCommit{Data: appHash}
if app.RetainBlocks > 0 && app.state.Height >= app.RetainBlocks {
resp.RetainHeight = app.state.Height - app.RetainBlocks + 1
}
- return resp
+ return resp, nil
}
-// Returns an associated value or nil if missing.
-func (app *Application) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
+// Query returns an associated value or nil if missing.
+func (app *Application) Query(_ context.Context, reqQuery *types.RequestQuery) (*types.ResponseQuery, error) {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+
switch reqQuery.Path {
+ case "/vsu":
+ vsu, err := app.valUpdatesRepo.get()
+ if err != nil {
+ return &types.ResponseQuery{
+ Code: code.CodeTypeUnknownError,
+ Log: err.Error(),
+ }, nil
+ }
+ data, err := encodeMsg(vsu)
+ if err != nil {
+ return &types.ResponseQuery{
+ Code: code.CodeTypeEncodingError,
+ Log: err.Error(),
+ }, nil
+ }
+ return &types.ResponseQuery{
+ Key: reqQuery.Data,
+ Value: data,
+ }, nil
case "/verify-chainlock":
- resQuery.Code = 0
-
- return resQuery
- default:
- if reqQuery.Prove {
- value, err := app.state.db.Get(prefixKey(reqQuery.Data))
- if err != nil {
- panic(err)
- }
- if value == nil {
- resQuery.Log = "does not exist"
- } else {
- resQuery.Log = "exists"
- }
- resQuery.Index = -1 // TODO make Proof return index
- resQuery.Key = reqQuery.Data
- resQuery.Value = value
- resQuery.Height = app.state.Height
-
- return
+ return &types.ResponseQuery{
+ Code: 0,
+ }, nil
+ case "/val":
+ vu, err := app.valUpdatesRepo.findBy(reqQuery.Data)
+ if err != nil {
+ return &types.ResponseQuery{
+ Code: code.CodeTypeUnknownError,
+ Log: err.Error(),
+ }, nil
+ }
+ value, err := encodeMsg(vu)
+ if err != nil {
+ return &types.ResponseQuery{
+ Code: code.CodeTypeEncodingError,
+ Log: err.Error(),
+ }, nil
}
+ return &types.ResponseQuery{
+ Key: reqQuery.Data,
+ Value: value,
+ }, nil
+ }
- resQuery.Key = reqQuery.Data
+ if reqQuery.Prove {
value, err := app.state.db.Get(prefixKey(reqQuery.Data))
if err != nil {
panic(err)
}
+
+ resQuery := types.ResponseQuery{
+ Index: -1,
+ Key: reqQuery.Data,
+ Value: value,
+ Height: app.state.Height,
+ }
+
if value == nil {
resQuery.Log = "does not exist"
} else {
resQuery.Log = "exists"
}
- resQuery.Value = value
- resQuery.Height = app.state.Height
- return resQuery
+ return &resQuery, nil
}
+
+ value, err := app.state.db.Get(prefixKey(reqQuery.Data))
+ if err != nil {
+ panic(err)
+ }
+
+ resQuery := types.ResponseQuery{
+ Key: reqQuery.Data,
+ Value: value,
+ Height: app.state.Height,
+ }
+
+ if value == nil {
+ resQuery.Log = "does not exist"
+ } else {
+ resQuery.Log = "exists"
+ }
+
+ return &resQuery, nil
+}
+
+func (app *Application) PrepareProposal(_ context.Context, req *types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
+ app.mu.Lock()
+ defer app.mu.Unlock()
+
+ return &types.ResponsePrepareProposal{
+ TxRecords: app.substPrepareTx(req.Txs, req.MaxTxBytes),
+ }, nil
+}
+
+func (*Application) ProcessProposal(_ context.Context, req *types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
+ for _, tx := range req.Txs {
+ if len(tx) == 0 {
+ return &types.ResponseProcessProposal{Status: types.ResponseProcessProposal_REJECT}, nil
+ }
+ }
+ return &types.ResponseProcessProposal{Status: types.ResponseProcessProposal_ACCEPT}, nil
+}
+
+//---------------------------------------------
+// update validators
+
+func (app *Application) ValidatorSet() (*types.ValidatorSetUpdate, error) {
+ return app.valUpdatesRepo.get()
+}
+
+func (app *Application) execValidatorSetTx(tx []byte) error {
+ vsu, err := UnmarshalValidatorSetUpdate(tx)
+ if err != nil {
+ return err
+ }
+ err = app.setValSetUpdate(vsu)
+ if err != nil {
+ return err
+ }
+ app.valSetUpdate = *vsu
+ return nil
+}
+
+// MarshalValidatorSetUpdate encodes a validator-set-update into protobuf, base64-encodes it, and adds the "vsu:" prefix
+func MarshalValidatorSetUpdate(vsu *types.ValidatorSetUpdate) ([]byte, error) {
+ pbData, err := proto.Marshal(vsu)
+ if err != nil {
+ return nil, err
+ }
+ return []byte(ValidatorSetUpdatePrefix + base64.StdEncoding.EncodeToString(pbData)), nil
+}
+
+// UnmarshalValidatorSetUpdate strips the "vsu:" prefix and unmarshals the remainder into a validator-set-update
+func UnmarshalValidatorSetUpdate(data []byte) (*types.ValidatorSetUpdate, error) {
+ l := len(ValidatorSetUpdatePrefix)
+ data, err := base64.StdEncoding.DecodeString(string(data[l:]))
+ if err != nil {
+ return nil, err
+ }
+ vsu := new(types.ValidatorSetUpdate)
+ err = proto.Unmarshal(data, vsu)
+ return vsu, err
+}
+
+type repository struct {
+ db dbm.DB
+}
+
+func (r *repository) set(vsu *types.ValidatorSetUpdate) error {
+ data, err := proto.Marshal(vsu)
+ if err != nil {
+ return err
+ }
+ return r.db.Set([]byte(ValidatorSetUpdatePrefix), data)
+}
+
+func (r *repository) get() (*types.ValidatorSetUpdate, error) {
+ data, err := r.db.Get([]byte(ValidatorSetUpdatePrefix))
+ if err != nil {
+ return nil, err
+ }
+ vsu := new(types.ValidatorSetUpdate)
+ err = proto.Unmarshal(data, vsu)
+ if err != nil {
+ return nil, err
+ }
+ return vsu, nil
+}
+
+func (r *repository) findBy(proTxHash crypto.ProTxHash) (*types.ValidatorUpdate, error) {
+ vsu, err := r.get()
+ if err != nil {
+ return nil, err
+ }
+ for _, vu := range vsu.ValidatorUpdates {
+ if bytes.Equal(vu.ProTxHash, proTxHash) {
+ return &vu, nil
+ }
+ }
+ return nil, fmt.Errorf("validator update not found for proTxHash %X", proTxHash)
+}
+
+func isValidatorSetUpdateTx(tx []byte) bool {
+ return strings.HasPrefix(string(tx), ValidatorSetUpdatePrefix)
+}
+
+func encodeMsg(data proto.Message) ([]byte, error) {
+ buf := bytes.NewBufferString("")
+ w := protoio.NewDelimitedWriter(buf)
+ _, err := w.WriteMsg(data)
+ if err != nil {
+ return nil, err
+ }
+ return buf.Bytes(), nil
+}
+
+// -----------------------------
+// prepare proposal machinery
+
+const PreparePrefix = "prepare"
+
+func isPrepareTx(tx []byte) bool {
+ return bytes.HasPrefix(tx, []byte(PreparePrefix))
+}
+
+// execPrepareTx is a no-op; the tx data is treated as a placeholder
+// and is substituted during PrepareProposal.
+func (app *Application) execPrepareTx(tx []byte) *types.ExecTxResult {
+ // noop
+ return &types.ExecTxResult{}
+}
+
+// substPrepareTx substitutes all the transactions prefixed with 'prepare' in the
+// proposal for transactions with the prefix stripped.
+// It marks each original 'prepare'-prefixed transaction as 'REMOVED' so that
+// Tendermint will remove them from its mempool.
+func (app *Application) substPrepareTx(blockData [][]byte, maxTxBytes int64) []*types.TxRecord {
+ trs := make([]*types.TxRecord, 0, len(blockData))
+ var removed []*types.TxRecord
+ var totalBytes int64
+ for _, tx := range blockData {
+ txMod := tx
+ action := types.TxRecord_UNMODIFIED
+ if isPrepareTx(tx) {
+ removed = append(removed, &types.TxRecord{
+ Tx: tx,
+ Action: types.TxRecord_REMOVED,
+ })
+ txMod = bytes.TrimPrefix(tx, []byte(PreparePrefix))
+ action = types.TxRecord_ADDED
+ }
+ totalBytes += int64(len(txMod))
+ if totalBytes > maxTxBytes {
+ break
+ }
+ trs = append(trs, &types.TxRecord{
+ Tx: txMod,
+ Action: action,
+ })
+ }
+
+ return append(trs, removed...)
+}
+
+func (app *Application) setValSetUpdate(valSetUpdate *types.ValidatorSetUpdate) error {
+ err := app.valUpdatesRepo.set(valSetUpdate)
+ if err != nil {
+ return err
+ }
+ app.valsIndex = make(map[string]*types.ValidatorUpdate)
+ for i, v := range valSetUpdate.ValidatorUpdates {
+ app.valsIndex[proTxHashString(v.ProTxHash)] = &valSetUpdate.ValidatorUpdates[i]
+ }
+ return nil
+}
+
+func proTxHashString(proTxHash crypto.ProTxHash) string {
+ return proTxHash.String()
}
diff --git a/abci/example/kvstore/kvstore_test.go b/abci/example/kvstore/kvstore_test.go
index a06c8a7912..922e3b25d4 100644
--- a/abci/example/kvstore/kvstore_test.go
+++ b/abci/example/kvstore/kvstore_test.go
@@ -4,10 +4,10 @@ import (
"bytes"
"context"
"fmt"
- "io/ioutil"
"sort"
"testing"
+ "github.com/fortytw2/leaktest"
"github.com/stretchr/testify/require"
abciclient "github.com/tendermint/tendermint/abci/client"
@@ -16,7 +16,6 @@ import (
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
- tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
const (
@@ -24,38 +23,44 @@ const (
testValue = "def"
)
-var ctx = context.Background()
-
-func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
- req := types.RequestDeliverTx{Tx: tx}
- ar := app.DeliverTx(req)
- require.False(t, ar.IsErr(), ar)
+func testKVStore(ctx context.Context, t *testing.T, app types.Application, tx []byte, key, value string) {
+ req := &types.RequestFinalizeBlock{Txs: [][]byte{tx}}
+ ar, err := app.FinalizeBlock(ctx, req)
+ require.NoError(t, err)
+ require.Equal(t, 1, len(ar.TxResults))
+ require.False(t, ar.TxResults[0].IsErr())
// repeating tx doesn't raise error
- ar = app.DeliverTx(req)
- require.False(t, ar.IsErr(), ar)
+ ar, err = app.FinalizeBlock(ctx, req)
+ require.NoError(t, err)
+ require.Equal(t, 1, len(ar.TxResults))
+ require.False(t, ar.TxResults[0].IsErr())
// commit
- app.Commit()
+ _, err = app.Commit(ctx)
+ require.NoError(t, err)
- info := app.Info(types.RequestInfo{})
+ info, err := app.Info(ctx, &types.RequestInfo{})
+ require.NoError(t, err)
require.NotZero(t, info.LastBlockHeight)
// make sure query is fine
- resQuery := app.Query(types.RequestQuery{
+ resQuery, err := app.Query(ctx, &types.RequestQuery{
Path: "/store",
Data: []byte(key),
})
+ require.NoError(t, err)
require.Equal(t, code.CodeTypeOK, resQuery.Code)
require.Equal(t, key, string(resQuery.Key))
require.Equal(t, value, string(resQuery.Value))
require.EqualValues(t, info.LastBlockHeight, resQuery.Height)
// make sure proof is fine
- resQuery = app.Query(types.RequestQuery{
+ resQuery, err = app.Query(ctx, &types.RequestQuery{
Path: "/store",
Data: []byte(key),
Prove: true,
})
+ require.NoError(t, err)
require.EqualValues(t, code.CodeTypeOK, resQuery.Code)
require.Equal(t, key, string(resQuery.Key))
require.Equal(t, value, string(resQuery.Value))
@@ -63,43 +68,55 @@ func testKVStore(t *testing.T, app types.Application, tx []byte, key, value stri
}
func TestKVStoreKV(t *testing.T) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
kvstore := NewApplication()
key := testKey
value := key
tx := []byte(key)
- testKVStore(t, kvstore, tx, key, value)
+ testKVStore(ctx, t, kvstore, tx, key, value)
value = testValue
tx = []byte(key + "=" + value)
- testKVStore(t, kvstore, tx, key, value)
+ testKVStore(ctx, t, kvstore, tx, key, value)
}
func TestPersistentKVStoreKV(t *testing.T) {
- dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
- if err != nil {
- t.Fatal(err)
- }
- kvstore := NewPersistentKVStoreApplication(dir)
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ dir := t.TempDir()
+ logger := log.NewNopLogger()
+
+ kvstore := NewPersistentKVStoreApplication(logger, dir)
key := testKey
value := key
tx := []byte(key)
- testKVStore(t, kvstore, tx, key, value)
+ testKVStore(ctx, t, kvstore, tx, key, value)
value = testValue
tx = []byte(key + "=" + value)
- testKVStore(t, kvstore, tx, key, value)
+ testKVStore(ctx, t, kvstore, tx, key, value)
}
func TestPersistentKVStoreInfo(t *testing.T) {
- dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
- if err != nil {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ dir := t.TempDir()
+ logger := log.NewNopLogger()
+
+ kvstore := NewPersistentKVStoreApplication(logger, dir)
+ if err := InitKVStore(ctx, kvstore); err != nil {
t.Fatal(err)
}
- kvstore := NewPersistentKVStoreApplication(dir)
- InitKVStore(kvstore)
height := int64(0)
- resInfo := kvstore.Info(types.RequestInfo{})
+ resInfo, err := kvstore.Info(ctx, &types.RequestInfo{})
+ if err != nil {
+ t.Fatal(err)
+ }
+
if resInfo.LastBlockHeight != height {
t.Fatalf("expected height of %d, got %d", height, resInfo.LastBlockHeight)
}
@@ -107,14 +124,19 @@ func TestPersistentKVStoreInfo(t *testing.T) {
// make and apply block
height = int64(1)
hash := []byte("foo")
- header := tmproto.Header{
- Height: height,
+ if _, err := kvstore.FinalizeBlock(ctx, &types.RequestFinalizeBlock{Hash: hash, Height: height}); err != nil {
+ t.Fatal(err)
}
- kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
- kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
- kvstore.Commit()
- resInfo = kvstore.Info(types.RequestInfo{})
+ if _, err := kvstore.Commit(ctx); err != nil {
+ t.Fatal(err)
+ }
+
+ resInfo, err = kvstore.Info(ctx, &types.RequestInfo{})
+ if err != nil {
+ t.Fatal(err)
+ }
if resInfo.LastBlockHeight != height {
t.Fatalf("expected height of %d, got %d", height, resInfo.LastBlockHeight)
}
@@ -123,11 +145,10 @@ func TestPersistentKVStoreInfo(t *testing.T) {
// add a validator, remove a validator, update a validator
func TestValUpdates(t *testing.T) {
- dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
- if err != nil {
- t.Fatal(err)
- }
- kvstore := NewPersistentKVStoreApplication(dir)
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ kvstore := NewApplication()
// init with some validators
total := 10
@@ -136,26 +157,30 @@ func TestValUpdates(t *testing.T) {
initVals := RandValidatorSetUpdate(nInit)
// initialize with the first nInit
- kvstore.InitChain(types.RequestInitChain{
+ _, err := kvstore.InitChain(ctx, &types.RequestInitChain{
ValidatorSet: &initVals,
})
+ if err != nil {
+ t.Fatal(err)
+ }
kvVals, err := kvstore.ValidatorSet()
require.NoError(t, err)
- valSetEqualTest(t, *kvVals, initVals)
+ valSetEqualTest(t, kvVals, &initVals)
tx, err := MarshalValidatorSetUpdate(&fullVals)
require.NoError(t, err)
// change the validator set to the full validator set
- makeApplyBlock(t, kvstore, 1, fullVals, tx)
+ makeApplyBlock(ctx, t, kvstore, 1, fullVals, tx)
kvVals, err = kvstore.ValidatorSet()
require.NoError(t, err)
- valSetEqualTest(t, *kvVals, fullVals)
+ valSetEqualTest(t, kvVals, &fullVals)
}
func makeApplyBlock(
+ ctx context.Context,
t *testing.T,
kvstore types.Application,
heightInt int,
@@ -164,24 +189,23 @@ func makeApplyBlock(
// make and apply block
height := int64(heightInt)
hash := []byte("foo")
- header := tmproto.Header{
+ resFinalizeBlock, err := kvstore.FinalizeBlock(ctx, &types.RequestFinalizeBlock{
+ Hash: hash,
Height: height,
- }
-
- kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
- for i, tx := range txs {
- r := kvstore.DeliverTx(types.RequestDeliverTx{Tx: tx})
- require.False(t, r.IsErr(), "i=%d, tx=%s, err=%s", i, tx, r.String())
- }
- resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
- kvstore.Commit()
+ Txs: txs,
+ })
+ require.NoError(t, err)
- valSetEqualTest(t, diff, *resEndBlock.ValidatorSetUpdate)
+ _, err = kvstore.Commit(ctx)
+ require.NoError(t, err)
+ valSetEqualTest(t, &diff, resFinalizeBlock.ValidatorSetUpdate)
}
// order doesn't matter
func valsEqualTest(t *testing.T, vals1, vals2 []types.ValidatorUpdate) {
+ t.Helper()
+
require.Equal(t, len(vals1), len(vals2), "vals dont match in len. got %d, expected %d", len(vals2), len(vals1))
sort.Sort(types.ValidatorUpdates(vals1))
sort.Sort(types.ValidatorUpdates(vals2))
@@ -189,153 +213,162 @@ func valsEqualTest(t *testing.T, vals1, vals2 []types.ValidatorUpdate) {
v2 := vals2[i]
if !v1.PubKey.Equal(v2.PubKey) ||
v1.Power != v2.Power {
- t.Fatalf("vals dont match at index %d. got %X/%d , expected %X/%d", i, *v2.PubKey, v2.Power, *v1.PubKey, v1.Power)
+ t.Fatalf("vals don't match at index %d. got %X/%d, expected %X/%d", i, v2.PubKey, v2.Power, v1.PubKey, v1.Power)
}
}
}
-func valSetEqualTest(t *testing.T, vals1, vals2 types.ValidatorSetUpdate) {
+func valSetEqualTest(t *testing.T, vals1, vals2 *types.ValidatorSetUpdate) {
+ t.Helper()
+
valsEqualTest(t, vals1.ValidatorUpdates, vals2.ValidatorUpdates)
- if !vals1.ThresholdPublicKey.Equal(vals2.ThresholdPublicKey) {
- t.Fatalf("val set threshold public key did not match. got %X, expected %X",
- vals1.ThresholdPublicKey, vals2.ThresholdPublicKey)
- }
- if !bytes.Equal(vals1.QuorumHash, vals2.QuorumHash) {
- t.Fatalf("val set quorum hash did not match. got %X, expected %X",
- vals1.QuorumHash, vals2.QuorumHash)
- }
+ require.True(t,
+ vals1.ThresholdPublicKey.Equal(vals2.ThresholdPublicKey),
+ "val set threshold public key did not match. got %X, expected %X",
+ vals1.ThresholdPublicKey, vals2.ThresholdPublicKey,
+ )
+ require.True(t,
+ bytes.Equal(vals1.QuorumHash, vals2.QuorumHash),
+ "val set quorum hash did not match. got %X, expected %X",
+ vals1.QuorumHash, vals2.QuorumHash,
+ )
}
-func makeSocketClientServer(app types.Application, name string) (abciclient.Client, service.Service, error) {
+func makeSocketClientServer(
+ ctx context.Context,
+ t *testing.T,
+ logger log.Logger,
+ app types.Application,
+ name string,
+) (abciclient.Client, service.Service, error) {
+ t.Helper()
+
+ ctx, cancel := context.WithCancel(ctx)
+ t.Cleanup(cancel)
+ t.Cleanup(leaktest.Check(t))
+
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
- logger := log.TestingLogger()
- server := abciserver.NewSocketServer(socket, app)
- server.SetLogger(logger.With("module", "abci-server"))
- if err := server.Start(); err != nil {
+ server := abciserver.NewSocketServer(logger.With("module", "abci-server"), socket, app)
+ if err := server.Start(ctx); err != nil {
+ cancel()
return nil, nil, err
}
// Connect to the socket
- client := abciclient.NewSocketClient(socket, false)
- client.SetLogger(logger.With("module", "abci-client"))
- if err := client.Start(); err != nil {
- if err = server.Stop(); err != nil {
- return nil, nil, err
- }
+ client := abciclient.NewSocketClient(logger.With("module", "abci-client"), socket, false)
+ if err := client.Start(ctx); err != nil {
+ cancel()
return nil, nil, err
}
return client, server, nil
}
-func makeGRPCClientServer(app types.Application, name string) (abciclient.Client, service.Service, error) {
+func makeGRPCClientServer(
+ ctx context.Context,
+ t *testing.T,
+ logger log.Logger,
+ app types.Application,
+ name string,
+) (abciclient.Client, service.Service, error) {
+ ctx, cancel := context.WithCancel(ctx)
+ t.Cleanup(cancel)
+ t.Cleanup(leaktest.Check(t))
+
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
- logger := log.TestingLogger()
- gapp := types.NewGRPCApplication(app)
- server := abciserver.NewGRPCServer(socket, gapp)
- server.SetLogger(logger.With("module", "abci-server"))
- if err := server.Start(); err != nil {
+ server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, app)
+
+ if err := server.Start(ctx); err != nil {
+ cancel()
return nil, nil, err
}
- client := abciclient.NewGRPCClient(socket, true)
- client.SetLogger(logger.With("module", "abci-client"))
- if err := client.Start(); err != nil {
- if err := server.Stop(); err != nil {
- return nil, nil, err
- }
+ client := abciclient.NewGRPCClient(logger.With("module", "abci-client"), socket, true)
+
+ if err := client.Start(ctx); err != nil {
+ cancel()
return nil, nil, err
}
return client, server, nil
}
func TestClientServer(t *testing.T) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ logger := log.NewNopLogger()
+
// set up socket app
kvstore := NewApplication()
- client, server, err := makeSocketClientServer(kvstore, "kvstore-socket")
+ client, server, err := makeSocketClientServer(ctx, t, logger, kvstore, "kvstore-socket")
require.NoError(t, err)
- t.Cleanup(func() {
- if err := server.Stop(); err != nil {
- t.Error(err)
- }
- })
- t.Cleanup(func() {
- if err := client.Stop(); err != nil {
- t.Error(err)
- }
- })
+ t.Cleanup(func() { cancel(); server.Wait() })
+ t.Cleanup(func() { cancel(); client.Wait() })
- runClientTests(t, client)
+ runClientTests(ctx, t, client)
// set up grpc app
kvstore = NewApplication()
- gclient, gserver, err := makeGRPCClientServer(kvstore, "/tmp/kvstore-grpc")
+ gclient, gserver, err := makeGRPCClientServer(ctx, t, logger, kvstore, "/tmp/kvstore-grpc")
require.NoError(t, err)
- t.Cleanup(func() {
- if err := gserver.Stop(); err != nil {
- t.Error(err)
- }
- })
- t.Cleanup(func() {
- if err := gclient.Stop(); err != nil {
- t.Error(err)
- }
- })
+ t.Cleanup(func() { cancel(); gserver.Wait() })
+ t.Cleanup(func() { cancel(); gclient.Wait() })
- runClientTests(t, gclient)
+ runClientTests(ctx, t, gclient)
}
-func runClientTests(t *testing.T, client abciclient.Client) {
+func runClientTests(ctx context.Context, t *testing.T, client abciclient.Client) {
// run some tests....
key := testKey
value := key
tx := []byte(key)
- testClient(t, client, tx, key, value)
+ testClient(ctx, t, client, tx, key, value)
value = testValue
tx = []byte(key + "=" + value)
- testClient(t, client, tx, key, value)
+ testClient(ctx, t, client, tx, key, value)
}
-func testClient(t *testing.T, app abciclient.Client, tx []byte, key, value string) {
- ar, err := app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
+func testClient(ctx context.Context, t *testing.T, app abciclient.Client, tx []byte, key, value string) {
+ ar, err := app.FinalizeBlock(ctx, &types.RequestFinalizeBlock{Txs: [][]byte{tx}})
require.NoError(t, err)
- require.False(t, ar.IsErr(), ar)
- // repeating tx doesn't raise error
- ar, err = app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
+ require.Equal(t, 1, len(ar.TxResults))
+ require.False(t, ar.TxResults[0].IsErr())
+ // repeating FinalizeBlock doesn't raise error
+ ar, err = app.FinalizeBlock(ctx, &types.RequestFinalizeBlock{Txs: [][]byte{tx}})
require.NoError(t, err)
- require.False(t, ar.IsErr(), ar)
+ require.Equal(t, 1, len(ar.TxResults))
+ require.False(t, ar.TxResults[0].IsErr())
// commit
- _, err = app.CommitSync(ctx)
+ _, err = app.Commit(ctx)
require.NoError(t, err)
- info, err := app.InfoSync(ctx, types.RequestInfo{})
+ info, err := app.Info(ctx, &types.RequestInfo{})
require.NoError(t, err)
require.NotZero(t, info.LastBlockHeight)
// make sure query is fine
- resQuery, err := app.QuerySync(ctx, types.RequestQuery{
+ resQuery, err := app.Query(ctx, &types.RequestQuery{
Path: "/store",
Data: []byte(key),
})
- require.Nil(t, err)
+ require.NoError(t, err)
require.Equal(t, code.CodeTypeOK, resQuery.Code)
require.Equal(t, key, string(resQuery.Key))
require.Equal(t, value, string(resQuery.Value))
require.EqualValues(t, info.LastBlockHeight, resQuery.Height)
// make sure proof is fine
- resQuery, err = app.QuerySync(ctx, types.RequestQuery{
+ resQuery, err = app.Query(ctx, &types.RequestQuery{
Path: "/store",
Data: []byte(key),
Prove: true,
})
- require.Nil(t, err)
+ require.NoError(t, err)
require.Equal(t, code.CodeTypeOK, resQuery.Code)
require.Equal(t, key, string(resQuery.Key))
require.Equal(t, value, string(resQuery.Value))
diff --git a/abci/example/kvstore/persistent_kvstore.go b/abci/example/kvstore/persistent_kvstore.go
index 7e418beb47..f3d39d49a0 100644
--- a/abci/example/kvstore/persistent_kvstore.go
+++ b/abci/example/kvstore/persistent_kvstore.go
@@ -1,245 +1,42 @@
package kvstore
import (
- "bytes"
- "encoding/base64"
- "strings"
+ "context"
- "github.com/gogo/protobuf/proto"
dbm "github.com/tendermint/tm-db"
- "github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/types"
- "github.com/tendermint/tendermint/internal/libs/protoio"
- "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
)
-const ValidatorSetUpdatePrefix string = "vsu:"
-
//-----------------------------------------
var _ types.Application = (*PersistentKVStoreApplication)(nil)
type PersistentKVStoreApplication struct {
- mtx sync.Mutex
- app *Application
- logger log.Logger
-
- valUpdatesRepo *repository
- ValidatorSetUpdates types.ValidatorSetUpdate
+ *Application
}
-func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication {
- const name = "kvstore"
- db, err := dbm.NewGoLevelDB(name, dbDir)
+func NewPersistentKVStoreApplication(logger log.Logger, dbDir string) *PersistentKVStoreApplication {
+ db, err := dbm.NewGoLevelDB("kvstore", dbDir)
if err != nil {
panic(err)
}
- return &PersistentKVStoreApplication{
- app: &Application{state: loadState(db)},
- logger: log.NewNopLogger(),
-
- valUpdatesRepo: &repository{db: db},
- }
-}
-
-func (app *PersistentKVStoreApplication) Close() error {
- return app.app.state.db.Close()
-}
-
-func (app *PersistentKVStoreApplication) SetLogger(l log.Logger) {
- app.logger = l
-}
-
-func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.ResponseInfo {
- res := app.app.Info(req)
- res.LastBlockHeight = app.app.state.Height
- res.LastBlockAppHash = app.app.state.AppHash
- return res
-}
-
-// DeliverTx will deliver a tx which is either "val:proTxHash!pubkey!power" or "key=value" or just arbitrary bytes
-func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
- app.mtx.Lock()
- defer app.mtx.Unlock()
- if isValidatorSetUpdateTx(req.Tx) {
- err := app.execValidatorSetTx(req.Tx)
- if err != nil {
- return types.ResponseDeliverTx{
- Code: code.CodeTypeUnknownError,
- Log: err.Error(),
- }
- }
- return types.ResponseDeliverTx{Code: code.CodeTypeOK}
- }
- return app.app.DeliverTx(req)
-}
-
-func (app *PersistentKVStoreApplication) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
- return app.app.CheckTx(req)
-}
-// Commit makes a commit in application's state
-func (app *PersistentKVStoreApplication) Commit() types.ResponseCommit {
- return app.app.Commit()
-}
-
-// Query when path=/val and data={validator address}, returns the validator update (types.ValidatorUpdate) varint encoded.
-// For any other path, returns an associated value or nil if missing.
-func (app *PersistentKVStoreApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
- switch reqQuery.Path {
- case "/vsu":
- vsu, err := app.valUpdatesRepo.get()
- if err != nil {
- return types.ResponseQuery{
- Code: code.CodeTypeUnknownError,
- Log: err.Error(),
- }
- }
- data, err := encodeMsg(vsu)
- if err != nil {
- return types.ResponseQuery{
- Code: code.CodeTypeEncodingError,
- Log: err.Error(),
- }
- }
- resQuery.Key = reqQuery.Data
- resQuery.Value = data
- return
- case "/verify-chainlock":
- resQuery.Code = 0
- return resQuery
- default:
- return app.app.Query(reqQuery)
- }
-}
-
-// InitChain saves the validators in the merkle tree
-func (app *PersistentKVStoreApplication) InitChain(req types.RequestInitChain) types.ResponseInitChain {
- err := app.valUpdatesRepo.set(req.ValidatorSet)
- if err != nil {
- app.logger.Error("error updating validators", "err", err)
- return types.ResponseInitChain{}
- }
- return types.ResponseInitChain{}
-}
-
-// BeginBlock tracks the block hash and header information
-func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
- app.mtx.Lock()
- defer app.mtx.Unlock()
-
- // reset valset changes
- app.ValidatorSetUpdates.ValidatorUpdates = make([]types.ValidatorUpdate, 0)
-
- return types.ResponseBeginBlock{}
-}
-
-// EndBlock updates the validator set
-func (app *PersistentKVStoreApplication) EndBlock(_ types.RequestEndBlock) types.ResponseEndBlock {
- app.mtx.Lock()
- defer app.mtx.Unlock()
- c := proto.Clone(&app.ValidatorSetUpdates).(*types.ValidatorSetUpdate)
- return types.ResponseEndBlock{ValidatorSetUpdate: c}
-}
-
-func (app *PersistentKVStoreApplication) ListSnapshots(
- req types.RequestListSnapshots) types.ResponseListSnapshots {
- return types.ResponseListSnapshots{}
-}
-
-func (app *PersistentKVStoreApplication) LoadSnapshotChunk(
- req types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
- return types.ResponseLoadSnapshotChunk{}
-}
-
-func (app *PersistentKVStoreApplication) OfferSnapshot(
- req types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
- return types.ResponseOfferSnapshot{Result: types.ResponseOfferSnapshot_ABORT}
-}
-
-func (app *PersistentKVStoreApplication) ApplySnapshotChunk(
- req types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
- return types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}
-}
-
-//---------------------------------------------
-// update validators
-
-func (app *PersistentKVStoreApplication) ValidatorSet() (*types.ValidatorSetUpdate, error) {
- return app.valUpdatesRepo.get()
-}
-
-func (app *PersistentKVStoreApplication) execValidatorSetTx(tx []byte) error {
- vsu, err := UnmarshalValidatorSetUpdate(tx)
- if err != nil {
- return err
- }
- err = app.valUpdatesRepo.set(vsu)
- if err != nil {
- return err
- }
- app.ValidatorSetUpdates = *vsu
- return nil
-}
-
-// MarshalValidatorSetUpdate encodes validator-set-update into protobuf, encode into base64 and add "vsu:" prefix
-func MarshalValidatorSetUpdate(vsu *types.ValidatorSetUpdate) ([]byte, error) {
- pbData, err := proto.Marshal(vsu)
- if err != nil {
- return nil, err
- }
- return []byte(ValidatorSetUpdatePrefix + base64.StdEncoding.EncodeToString(pbData)), nil
-}
-
-// UnmarshalValidatorSetUpdate removes "vsu:" prefix and unmarshal a string into validator-set-update
-func UnmarshalValidatorSetUpdate(data []byte) (*types.ValidatorSetUpdate, error) {
- l := len(ValidatorSetUpdatePrefix)
- data, err := base64.StdEncoding.DecodeString(string(data[l:]))
- if err != nil {
- return nil, err
- }
- vsu := new(types.ValidatorSetUpdate)
- err = proto.Unmarshal(data, vsu)
- return vsu, err
-}
-
-type repository struct {
- db dbm.DB
-}
-
-func (r *repository) set(vsu *types.ValidatorSetUpdate) error {
- data, err := proto.Marshal(vsu)
- if err != nil {
- return err
- }
- return r.db.Set([]byte(ValidatorSetUpdatePrefix), data)
-}
-
-func (r *repository) get() (*types.ValidatorSetUpdate, error) {
- data, err := r.db.Get([]byte(ValidatorSetUpdatePrefix))
- if err != nil {
- return nil, err
- }
- vsu := new(types.ValidatorSetUpdate)
- err = proto.Unmarshal(data, vsu)
- if err != nil {
- return nil, err
+ return &PersistentKVStoreApplication{
+ Application: &Application{
+ state: loadState(db),
+ logger: logger,
+ valsIndex: make(map[string]*types.ValidatorUpdate),
+ valUpdatesRepo: &repository{db},
+ },
}
- return vsu, nil
}
-func isValidatorSetUpdateTx(tx []byte) bool {
- return strings.HasPrefix(string(tx), ValidatorSetUpdatePrefix)
+func (app *PersistentKVStoreApplication) OfferSnapshot(_ context.Context, req *types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
+ return &types.ResponseOfferSnapshot{Result: types.ResponseOfferSnapshot_ABORT}, nil
}
-func encodeMsg(data proto.Message) ([]byte, error) {
- buf := bytes.NewBufferString("")
- w := protoio.NewDelimitedWriter(buf)
- _, err := w.WriteMsg(data)
- if err != nil {
- return nil, err
- }
- return buf.Bytes(), nil
+func (app *PersistentKVStoreApplication) ApplySnapshotChunk(_ context.Context, req *types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
+ return &types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}, nil
}
diff --git a/abci/server/grpc_server.go b/abci/server/grpc_server.go
index 503f0b64f1..0dfee8169d 100644
--- a/abci/server/grpc_server.go
+++ b/abci/server/grpc_server.go
@@ -1,61 +1,83 @@
package server
import (
+ "context"
"net"
"google.golang.org/grpc"
"github.com/tendermint/tendermint/abci/types"
+ "github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
type GRPCServer struct {
service.BaseService
+ logger log.Logger
- proto string
- addr string
- listener net.Listener
- server *grpc.Server
+ proto string
+ addr string
+ server *grpc.Server
- app types.ABCIApplicationServer
+ app types.Application
}
// NewGRPCServer returns a new gRPC ABCI server
-func NewGRPCServer(protoAddr string, app types.ABCIApplicationServer) service.Service {
+func NewGRPCServer(logger log.Logger, protoAddr string, app types.Application) service.Service {
proto, addr := tmnet.ProtocolAndAddress(protoAddr)
s := &GRPCServer{
- proto: proto,
- addr: addr,
- listener: nil,
- app: app,
+ logger: logger,
+ proto: proto,
+ addr: addr,
+ app: app,
}
- s.BaseService = *service.NewBaseService(nil, "ABCIServer", s)
+ s.BaseService = *service.NewBaseService(logger, "ABCIServer", s)
return s
}
// OnStart starts the gRPC service.
-func (s *GRPCServer) OnStart() error {
-
+func (s *GRPCServer) OnStart(ctx context.Context) error {
ln, err := net.Listen(s.proto, s.addr)
if err != nil {
return err
}
- s.listener = ln
s.server = grpc.NewServer()
- types.RegisterABCIApplicationServer(s.server, s.app)
+ types.RegisterABCIApplicationServer(s.server, &gRPCApplication{Application: s.app})
- s.Logger.Info("Listening", "proto", s.proto, "addr", s.addr)
+ s.logger.Info("Listening", "proto", s.proto, "addr", s.addr)
go func() {
- if err := s.server.Serve(s.listener); err != nil {
- s.Logger.Error("Error serving gRPC server", "err", err)
+ go func() {
+ <-ctx.Done()
+ s.server.GracefulStop()
+ }()
+
+ if err := s.server.Serve(ln); err != nil {
+ s.logger.Error("error serving gRPC server", "err", err)
}
}()
return nil
}
// OnStop stops the gRPC server.
-func (s *GRPCServer) OnStop() {
- s.server.Stop()
+func (s *GRPCServer) OnStop() { s.server.Stop() }
+
+//-------------------------------------------------------
+
+// gRPCApplication is a gRPC shim for Application
+type gRPCApplication struct {
+ types.Application
+}
+
+func (app *gRPCApplication) Echo(_ context.Context, req *types.RequestEcho) (*types.ResponseEcho, error) {
+ return &types.ResponseEcho{Message: req.Message}, nil
+}
+
+func (app *gRPCApplication) Flush(_ context.Context, req *types.RequestFlush) (*types.ResponseFlush, error) {
+ return &types.ResponseFlush{}, nil
+}
+
+func (app *gRPCApplication) Commit(ctx context.Context, req *types.RequestCommit) (*types.ResponseCommit, error) {
+ return app.Application.Commit(ctx)
}
diff --git a/abci/server/server.go b/abci/server/server.go
index 6dd13ad020..0e0173ca65 100644
--- a/abci/server/server.go
+++ b/abci/server/server.go
@@ -12,17 +12,18 @@ import (
"fmt"
"github.com/tendermint/tendermint/abci/types"
+ "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
)
-func NewServer(protoAddr, transport string, app types.Application) (service.Service, error) {
+func NewServer(logger log.Logger, protoAddr, transport string, app types.Application) (service.Service, error) {
var s service.Service
var err error
switch transport {
case "socket":
- s = NewSocketServer(protoAddr, app)
+ s = NewSocketServer(logger, protoAddr, app)
case "grpc":
- s = NewGRPCServer(protoAddr, types.NewGRPCApplication(app))
+ s = NewGRPCServer(logger, protoAddr, app)
default:
err = fmt.Errorf("unknown server type %s", transport)
}
diff --git a/abci/server/socket_server.go b/abci/server/socket_server.go
index 85539645bf..570ecfb4e7 100644
--- a/abci/server/socket_server.go
+++ b/abci/server/socket_server.go
@@ -2,15 +2,16 @@ package server
import (
"bufio"
+ "context"
+ "errors"
"fmt"
"io"
"net"
- "os"
"runtime"
+ "sync"
"github.com/tendermint/tendermint/abci/types"
- tmsync "github.com/tendermint/tendermint/internal/libs/sync"
- tmlog "github.com/tendermint/tendermint/libs/log"
+ "github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
@@ -19,236 +20,298 @@ import (
type SocketServer struct {
service.BaseService
- isLoggerSet bool
+ logger log.Logger
proto string
addr string
listener net.Listener
- connsMtx tmsync.Mutex
- conns map[int]net.Conn
+ connsMtx sync.Mutex
+ connsClose map[int]func()
nextConnID int
- appMtx tmsync.Mutex
- app types.Application
+ app types.Application
}
-func NewSocketServer(protoAddr string, app types.Application) service.Service {
+func NewSocketServer(logger log.Logger, protoAddr string, app types.Application) service.Service {
proto, addr := tmnet.ProtocolAndAddress(protoAddr)
s := &SocketServer{
- proto: proto,
- addr: addr,
- listener: nil,
- app: app,
- conns: make(map[int]net.Conn),
+ logger: logger,
+ proto: proto,
+ addr: addr,
+ listener: nil,
+ app: app,
+ connsClose: make(map[int]func()),
}
- s.BaseService = *service.NewBaseService(nil, "ABCIServer", s)
+ s.BaseService = *service.NewBaseService(logger, "ABCIServer", s)
return s
}
-func (s *SocketServer) SetLogger(l tmlog.Logger) {
- s.BaseService.SetLogger(l)
- s.isLoggerSet = true
-}
-
-func (s *SocketServer) OnStart() error {
+func (s *SocketServer) OnStart(ctx context.Context) error {
ln, err := net.Listen(s.proto, s.addr)
if err != nil {
return err
}
s.listener = ln
- go s.acceptConnectionsRoutine()
+ go s.acceptConnectionsRoutine(ctx)
return nil
}
func (s *SocketServer) OnStop() {
if err := s.listener.Close(); err != nil {
- s.Logger.Error("Error closing listener", "err", err)
+ s.logger.Error("error closing listener", "err", err)
}
s.connsMtx.Lock()
defer s.connsMtx.Unlock()
- for id, conn := range s.conns {
- delete(s.conns, id)
- if err := conn.Close(); err != nil {
- s.Logger.Error("Error closing connection", "id", id, "conn", conn, "err", err)
- }
+
+ for _, closer := range s.connsClose {
+ closer()
}
}
-func (s *SocketServer) addConn(conn net.Conn) int {
+func (s *SocketServer) addConn(closer func()) int {
s.connsMtx.Lock()
defer s.connsMtx.Unlock()
connID := s.nextConnID
s.nextConnID++
- s.conns[connID] = conn
-
+ s.connsClose[connID] = closer
return connID
}
// deletes conn even if close errs
-func (s *SocketServer) rmConn(connID int) error {
+func (s *SocketServer) rmConn(connID int) {
s.connsMtx.Lock()
defer s.connsMtx.Unlock()
-
- conn, ok := s.conns[connID]
- if !ok {
- return fmt.Errorf("connection %d does not exist", connID)
+ if closer, ok := s.connsClose[connID]; ok {
+ closer()
+ delete(s.connsClose, connID)
}
-
- delete(s.conns, connID)
- return conn.Close()
}
-func (s *SocketServer) acceptConnectionsRoutine() {
+func (s *SocketServer) acceptConnectionsRoutine(ctx context.Context) {
for {
+ if ctx.Err() != nil {
+ return
+ }
+
// Accept a connection
- s.Logger.Info("Waiting for new connection...")
+ s.logger.Info("Waiting for new connection...")
conn, err := s.listener.Accept()
if err != nil {
if !s.IsRunning() {
return // Ignore error from listener closing.
}
- s.Logger.Error("Failed to accept connection", "err", err)
+ s.logger.Error("Failed to accept connection", "err", err)
continue
}
- s.Logger.Info("Accepted a new connection")
+ cctx, ccancel := context.WithCancel(ctx)
+ connID := s.addConn(ccancel)
- connID := s.addConn(conn)
+ s.logger.Info("Accepted a new connection", "id", connID)
- closeConn := make(chan error, 2) // Push to signal connection closed
responses := make(chan *types.Response, 1000) // A channel to buffer responses
+ once := &sync.Once{}
+ closer := func(err error) {
+ ccancel()
+ once.Do(func() {
+ if cerr := conn.Close(); cerr != nil {
+ s.logger.Error("error closing connection",
+ "id", connID,
+ "close_err", cerr,
+ "err", err)
+ }
+ s.rmConn(connID)
+
+ switch {
+ case errors.Is(err, context.Canceled):
+ s.logger.Error("Connection terminated",
+ "id", connID,
+ "err", err)
+ case errors.Is(err, context.DeadlineExceeded):
+ s.logger.Error("Connection encountered timeout",
+ "id", connID,
+ "err", err)
+ case errors.Is(err, io.EOF):
+ s.logger.Error("Connection was closed by client",
+ "id", connID)
+ case err != nil:
+ s.logger.Error("Connection error",
+ "id", connID,
+ "err", err)
+ default:
+ s.logger.Error("Connection was closed",
+ "id", connID)
+ }
+ })
+ }
+
// Read requests from conn and deal with them
- go s.handleRequests(closeConn, conn, responses)
+ go s.handleRequests(cctx, closer, conn, responses)
// Pull responses from 'responses' and write them to conn.
- go s.handleResponses(closeConn, conn, responses)
-
- // Wait until signal to close connection
- go s.waitForClose(closeConn, connID)
- }
-}
-
-func (s *SocketServer) waitForClose(closeConn chan error, connID int) {
- err := <-closeConn
- switch {
- case err == io.EOF:
- s.Logger.Error("Connection was closed by client")
- case err != nil:
- s.Logger.Error("Connection error", "err", err)
- default:
- // never happens
- s.Logger.Error("Connection was closed")
- }
-
- // Close the connection
- if err := s.rmConn(connID); err != nil {
- s.Logger.Error("Error closing connection", "err", err)
+ go s.handleResponses(cctx, closer, conn, responses)
}
}
// Read requests from conn and deal with them
-func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, responses chan<- *types.Response) {
- var count int
+func (s *SocketServer) handleRequests(ctx context.Context, closer func(error), conn io.Reader, responses chan<- *types.Response) {
var bufReader = bufio.NewReader(conn)
defer func() {
// make sure to recover from any app-related panics to allow proper socket cleanup
- r := recover()
- if r != nil {
+ if r := recover(); r != nil {
const size = 64 << 10
buf := make([]byte, size)
buf = buf[:runtime.Stack(buf, false)]
- err := fmt.Errorf("recovered from panic: %v\n%s", r, buf)
- if !s.isLoggerSet {
- fmt.Fprintln(os.Stderr, err)
- }
- closeConn <- err
- s.appMtx.Unlock()
+ closer(fmt.Errorf("recovered from panic: %v\n%s", r, buf))
}
}()
for {
+ req := &types.Request{}
+ if err := types.ReadMessage(bufReader, req); err != nil {
+ closer(fmt.Errorf("error reading message: %w", err))
+ return
+ }
- var req = &types.Request{}
- err := types.ReadMessage(bufReader, req)
+ resp, err := s.processRequest(ctx, req)
if err != nil {
- if err == io.EOF {
- closeConn <- err
- } else {
- closeConn <- fmt.Errorf("error reading message: %w", err)
- }
+ closer(err)
return
}
- s.appMtx.Lock()
- count++
- s.handleRequest(req, responses)
- s.appMtx.Unlock()
+
+ select {
+ case <-ctx.Done():
+ closer(ctx.Err())
+ return
+ case responses <- resp:
+ }
}
}
-func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types.Response) {
+func (s *SocketServer) processRequest(ctx context.Context, req *types.Request) (*types.Response, error) {
switch r := req.Value.(type) {
case *types.Request_Echo:
- responses <- types.ToResponseEcho(r.Echo.Message)
+ return types.ToResponseEcho(r.Echo.Message), nil
case *types.Request_Flush:
- responses <- types.ToResponseFlush()
+ return types.ToResponseFlush(), nil
case *types.Request_Info:
- res := s.app.Info(*r.Info)
- responses <- types.ToResponseInfo(res)
- case *types.Request_DeliverTx:
- res := s.app.DeliverTx(*r.DeliverTx)
- responses <- types.ToResponseDeliverTx(res)
+ res, err := s.app.Info(ctx, r.Info)
+ if err != nil {
+ return nil, err
+ }
+
+ return types.ToResponseInfo(res), nil
case *types.Request_CheckTx:
- res := s.app.CheckTx(*r.CheckTx)
- responses <- types.ToResponseCheckTx(res)
+ res, err := s.app.CheckTx(ctx, r.CheckTx)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseCheckTx(res), nil
case *types.Request_Commit:
- res := s.app.Commit()
- responses <- types.ToResponseCommit(res)
+ res, err := s.app.Commit(ctx)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseCommit(res), nil
case *types.Request_Query:
- res := s.app.Query(*r.Query)
- responses <- types.ToResponseQuery(res)
+ res, err := s.app.Query(ctx, r.Query)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseQuery(res), nil
case *types.Request_InitChain:
- res := s.app.InitChain(*r.InitChain)
- responses <- types.ToResponseInitChain(res)
- case *types.Request_BeginBlock:
- res := s.app.BeginBlock(*r.BeginBlock)
- responses <- types.ToResponseBeginBlock(res)
- case *types.Request_EndBlock:
- res := s.app.EndBlock(*r.EndBlock)
- responses <- types.ToResponseEndBlock(res)
+ res, err := s.app.InitChain(ctx, r.InitChain)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseInitChain(res), nil
case *types.Request_ListSnapshots:
- res := s.app.ListSnapshots(*r.ListSnapshots)
- responses <- types.ToResponseListSnapshots(res)
+ res, err := s.app.ListSnapshots(ctx, r.ListSnapshots)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseListSnapshots(res), nil
case *types.Request_OfferSnapshot:
- res := s.app.OfferSnapshot(*r.OfferSnapshot)
- responses <- types.ToResponseOfferSnapshot(res)
+ res, err := s.app.OfferSnapshot(ctx, r.OfferSnapshot)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseOfferSnapshot(res), nil
+ case *types.Request_PrepareProposal:
+ res, err := s.app.PrepareProposal(ctx, r.PrepareProposal)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponsePrepareProposal(res), nil
+ case *types.Request_ProcessProposal:
+ res, err := s.app.ProcessProposal(ctx, r.ProcessProposal)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseProcessProposal(res), nil
case *types.Request_LoadSnapshotChunk:
- res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
- responses <- types.ToResponseLoadSnapshotChunk(res)
+ res, err := s.app.LoadSnapshotChunk(ctx, r.LoadSnapshotChunk)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseLoadSnapshotChunk(res), nil
case *types.Request_ApplySnapshotChunk:
- res := s.app.ApplySnapshotChunk(*r.ApplySnapshotChunk)
- responses <- types.ToResponseApplySnapshotChunk(res)
+ res, err := s.app.ApplySnapshotChunk(ctx, r.ApplySnapshotChunk)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseApplySnapshotChunk(res), nil
+ case *types.Request_ExtendVote:
+ res, err := s.app.ExtendVote(ctx, r.ExtendVote)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseExtendVote(res), nil
+ case *types.Request_VerifyVoteExtension:
+ res, err := s.app.VerifyVoteExtension(ctx, r.VerifyVoteExtension)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseVerifyVoteExtension(res), nil
+ case *types.Request_FinalizeBlock:
+ res, err := s.app.FinalizeBlock(ctx, r.FinalizeBlock)
+ if err != nil {
+ return nil, err
+ }
+ return types.ToResponseFinalizeBlock(res), nil
default:
- responses <- types.ToResponseException("Unknown request")
+ return types.ToResponseException("Unknown request"), errors.New("unknown request type")
}
}
// Pull responses from 'responses' and write them to conn.
-func (s *SocketServer) handleResponses(closeConn chan error, conn io.Writer, responses <-chan *types.Response) {
+func (s *SocketServer) handleResponses(
+ ctx context.Context,
+ closer func(error),
+ conn io.Writer,
+ responses <-chan *types.Response,
+) {
bw := bufio.NewWriter(conn)
- for res := range responses {
- if err := types.WriteMessage(res, bw); err != nil {
- closeConn <- fmt.Errorf("error writing message: %w", err)
- return
- }
- if err := bw.Flush(); err != nil {
- closeConn <- fmt.Errorf("error flushing write buffer: %w", err)
+ for {
+ select {
+ case <-ctx.Done():
+ closer(ctx.Err())
return
+ case res := <-responses:
+ if err := types.WriteMessage(res, bw); err != nil {
+ closer(fmt.Errorf("error writing message: %w", err))
+ return
+ }
+ if err := bw.Flush(); err != nil {
+ closer(fmt.Errorf("error flushing write buffer: %w", err))
+ return
+ }
}
}
}
diff --git a/abci/tests/client_server_test.go b/abci/tests/client_server_test.go
index 62dc6e07e4..a97c0c7c4c 100644
--- a/abci/tests/client_server_test.go
+++ b/abci/tests/client_server_test.go
@@ -1,27 +1,40 @@
package tests
import (
+ "context"
"testing"
+ "github.com/fortytw2/leaktest"
"github.com/stretchr/testify/assert"
abciclientent "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abciserver "github.com/tendermint/tendermint/abci/server"
+ "github.com/tendermint/tendermint/libs/log"
)
func TestClientServerNoAddrPrefix(t *testing.T) {
- addr := "localhost:26658"
- transport := "socket"
+ t.Cleanup(leaktest.Check(t))
+
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ const (
+ addr = "localhost:26658"
+ transport = "socket"
+ )
app := kvstore.NewApplication()
+ logger := log.NewTestingLogger(t)
- server, err := abciserver.NewServer(addr, transport, app)
+ server, err := abciserver.NewServer(logger, addr, transport, app)
assert.NoError(t, err, "expected no error on NewServer")
- err = server.Start()
+ err = server.Start(ctx)
assert.NoError(t, err, "expected no error on server.Start")
+ t.Cleanup(server.Wait)
- client, err := abciclientent.NewClient(addr, transport, true)
+ client, err := abciclientent.NewClient(logger, addr, transport, true)
assert.NoError(t, err, "expected no error on NewClient")
- err = client.Start()
+ err = client.Start(ctx)
assert.NoError(t, err, "expected no error on client.Start")
+ t.Cleanup(client.Wait)
}
diff --git a/abci/tests/server/client.go b/abci/tests/server/client.go
index 55af386861..ed20d3cb07 100644
--- a/abci/tests/server/client.go
+++ b/abci/tests/server/client.go
@@ -6,15 +6,13 @@ import (
"errors"
"fmt"
- abcicli "github.com/tendermint/tendermint/abci/client"
+ abciclient "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/dash/llmq"
)
-var ctx = context.Background()
-
-func InitChain(client abcicli.Client) error {
+func InitChain(ctx context.Context, client abciclient.Client) error {
const total = 10
ld, err := llmq.Generate(crypto.RandProTxHashes(total))
if err != nil {
@@ -24,7 +22,7 @@ func InitChain(client abcicli.Client) error {
if err != nil {
return err
}
- _, err = client.InitChainSync(context.Background(), types.RequestInitChain{
+ _, err = client.InitChain(ctx, &types.RequestInitChain{
ValidatorSet: validatorSet,
})
if err != nil {
@@ -35,8 +33,8 @@ func InitChain(client abcicli.Client) error {
return nil
}
-func Commit(client abcicli.Client, hashExp []byte) error {
- res, err := client.CommitSync(ctx)
+func Commit(ctx context.Context, client abciclient.Client, hashExp []byte) error {
+ res, err := client.Commit(ctx)
data := res.Data
if err != nil {
fmt.Println("Failed test: Commit")
@@ -52,27 +50,29 @@ func Commit(client abcicli.Client, hashExp []byte) error {
return nil
}
-func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
- res, _ := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
- code, data, log := res.Code, res.Data, res.Log
- if code != codeExp {
- fmt.Println("Failed test: DeliverTx")
- fmt.Printf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v\n",
- code, codeExp, log)
- return errors.New("deliverTx error")
- }
- if !bytes.Equal(data, dataExp) {
- fmt.Println("Failed test: DeliverTx")
- fmt.Printf("DeliverTx response data was unexpected. Got %X expected %X\n",
- data, dataExp)
- return errors.New("deliverTx error")
+func FinalizeBlock(ctx context.Context, client abciclient.Client, txBytes [][]byte, codeExp []uint32, dataExp []byte) error {
+ res, _ := client.FinalizeBlock(ctx, &types.RequestFinalizeBlock{Txs: txBytes})
+ for i, tx := range res.TxResults {
+ code, data, log := tx.Code, tx.Data, tx.Log
+ if code != codeExp[i] {
+ fmt.Println("Failed test: FinalizeBlock")
+ fmt.Printf("FinalizeBlock response code was unexpected. Got %v expected %v. Log: %v\n",
+ code, codeExp[i], log)
+ return errors.New("FinalizeBlock error")
+ }
+ if !bytes.Equal(data, dataExp) {
+ fmt.Println("Failed test: FinalizeBlock")
+ fmt.Printf("FinalizeBlock response data was unexpected. Got %X expected %X\n",
+ data, dataExp)
+ return errors.New("FinalizeBlock error")
+ }
}
- fmt.Println("Passed test: DeliverTx")
+ fmt.Println("Passed test: FinalizeBlock")
return nil
}
-func CheckTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
- res, _ := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
+func CheckTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
+ res, _ := client.CheckTx(ctx, &types.RequestCheckTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: CheckTx")
diff --git a/abci/tests/test_cli/ex1.abci b/abci/tests/test_cli/ex1.abci
index e909266ecf..09457189ed 100644
--- a/abci/tests/test_cli/ex1.abci
+++ b/abci/tests/test_cli/ex1.abci
@@ -1,10 +1,10 @@
echo hello
info
commit
-deliver_tx "abc"
+finalize_block "abc"
info
commit
query "abc"
-deliver_tx "def=xyz"
+finalize_block "def=xyz" "ghi=123"
commit
query "def"
diff --git a/abci/tests/test_cli/ex1.abci.out b/abci/tests/test_cli/ex1.abci.out
index 735e4bea2e..01d0150f0f 100644
--- a/abci/tests/test_cli/ex1.abci.out
+++ b/abci/tests/test_cli/ex1.abci.out
@@ -3,24 +3,24 @@
-> data: hello
-> data.hex: 0x68656C6C6F
-> info
+> info
-> code: OK
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
-> commit
+> commit
-> code: OK
-> data.hex: 0x0000000000000000000000000000000000000000000000000000000000000000
-> deliver_tx "abc"
+> finalize_block "abc"
-> code: OK
-> info
+> info
-> code: OK
-> data: {"size":1}
-> data.hex: 0x7B2273697A65223A317D
-> commit
+> commit
-> code: OK
-> data.hex: 0x0200000000000000000000000000000000000000000000000000000000000000
@@ -33,12 +33,14 @@
-> value: abc
-> value.hex: 616263
-> deliver_tx "def=xyz"
+> finalize_block "def=xyz" "ghi=123"
+-> code: OK
+> finalize_block "def=xyz" "ghi=123"
-> code: OK
-> commit
+> commit
-> code: OK
--> data.hex: 0x0400000000000000000000000000000000000000000000000000000000000000
+-> data.hex: 0x0600000000000000000000000000000000000000000000000000000000000000
> query "def"
-> code: OK
diff --git a/abci/tests/test_cli/ex2.abci b/abci/tests/test_cli/ex2.abci
index 965ca842c7..90e99c2f90 100644
--- a/abci/tests/test_cli/ex2.abci
+++ b/abci/tests/test_cli/ex2.abci
@@ -1,7 +1,7 @@
check_tx 0x00
check_tx 0xff
-deliver_tx 0x00
+finalize_block 0x00
check_tx 0x00
-deliver_tx 0x01
-deliver_tx 0x04
+finalize_block 0x01
+finalize_block 0x04
info
diff --git a/abci/tests/test_cli/ex2.abci.out b/abci/tests/test_cli/ex2.abci.out
index 7ef8abbc45..aab0b1966f 100644
--- a/abci/tests/test_cli/ex2.abci.out
+++ b/abci/tests/test_cli/ex2.abci.out
@@ -4,20 +4,20 @@
> check_tx 0xff
-> code: OK
-> deliver_tx 0x00
+> finalize_block 0x00
-> code: OK
> check_tx 0x00
-> code: OK
-> deliver_tx 0x01
+> finalize_block 0x01
-> code: OK
-> deliver_tx 0x04
+> finalize_block 0x04
-> code: OK
> info
-> code: OK
--> data: {"hashes":0,"txs":3}
--> data.hex: 0x7B22686173686573223A302C22747873223A337D
+-> data: {"size":3}
+-> data.hex: 0x7B2273697A65223A337D
diff --git a/abci/types/application.go b/abci/types/application.go
index 2a3cabd8bb..e74b877438 100644
--- a/abci/types/application.go
+++ b/abci/types/application.go
@@ -1,33 +1,36 @@
package types
-import (
- "context"
-)
+import "context"
+//go:generate ../../scripts/mockery_generate.sh Application
// Application is an interface that enables any finite, deterministic state machine
// to be driven by a blockchain-based replication engine via the ABCI.
-// All methods take a RequestXxx argument and return a ResponseXxx argument,
-// except CheckTx/DeliverTx, which take `tx []byte`, and `Commit`, which takes nothing.
type Application interface {
// Info/Query Connection
- Info(RequestInfo) ResponseInfo // Return application info
- Query(RequestQuery) ResponseQuery // Query for state
+ Info(context.Context, *RequestInfo) (*ResponseInfo, error) // Return application info
+ Query(context.Context, *RequestQuery) (*ResponseQuery, error) // Query for state
// Mempool Connection
- CheckTx(RequestCheckTx) ResponseCheckTx // Validate a tx for the mempool
+ CheckTx(context.Context, *RequestCheckTx) (*ResponseCheckTx, error) // Validate a tx for the mempool
// Consensus Connection
- InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
- BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
- DeliverTx(RequestDeliverTx) ResponseDeliverTx // Deliver a tx for full processing
- EndBlock(RequestEndBlock) ResponseEndBlock // Signals the end of a block, returns changes to the validator set
- Commit() ResponseCommit // Commit the state and return the application Merkle root hash
+ InitChain(context.Context, *RequestInitChain) (*ResponseInitChain, error) // Initialize blockchain with validators/other info from TendermintCore
+ PrepareProposal(context.Context, *RequestPrepareProposal) (*ResponsePrepareProposal, error)
+ ProcessProposal(context.Context, *RequestProcessProposal) (*ResponseProcessProposal, error)
+ // Commit the state and return the application Merkle root hash
+ Commit(context.Context) (*ResponseCommit, error)
+ // Create application specific vote extension
+ ExtendVote(context.Context, *RequestExtendVote) (*ResponseExtendVote, error)
+ // Verify application's vote extension data
+ VerifyVoteExtension(context.Context, *RequestVerifyVoteExtension) (*ResponseVerifyVoteExtension, error)
+ // Deliver the decided block with its txs to the Application
+ FinalizeBlock(context.Context, *RequestFinalizeBlock) (*ResponseFinalizeBlock, error)
// State Sync Connection
- ListSnapshots(RequestListSnapshots) ResponseListSnapshots // List available snapshots
- OfferSnapshot(RequestOfferSnapshot) ResponseOfferSnapshot // Offer a snapshot to the application
- LoadSnapshotChunk(RequestLoadSnapshotChunk) ResponseLoadSnapshotChunk // Load a snapshot chunk
- ApplySnapshotChunk(RequestApplySnapshotChunk) ResponseApplySnapshotChunk // Apply a shapshot chunk
+ ListSnapshots(context.Context, *RequestListSnapshots) (*ResponseListSnapshots, error) // List available snapshots
+ OfferSnapshot(context.Context, *RequestOfferSnapshot) (*ResponseOfferSnapshot, error) // Offer a snapshot to the application
+ LoadSnapshotChunk(context.Context, *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error) // Load a snapshot chunk
+	ApplySnapshotChunk(context.Context, *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error) // Apply a snapshot chunk
}
//-------------------------------------------------------
@@ -35,140 +38,84 @@ type Application interface {
var _ Application = (*BaseApplication)(nil)
-type BaseApplication struct {
-}
+type BaseApplication struct{}
func NewBaseApplication() *BaseApplication {
return &BaseApplication{}
}
-func (BaseApplication) Info(req RequestInfo) ResponseInfo {
- return ResponseInfo{}
-}
-
-func (BaseApplication) DeliverTx(req RequestDeliverTx) ResponseDeliverTx {
- return ResponseDeliverTx{Code: CodeTypeOK}
-}
-
-func (BaseApplication) CheckTx(req RequestCheckTx) ResponseCheckTx {
- return ResponseCheckTx{Code: CodeTypeOK}
-}
-
-func (BaseApplication) Commit() ResponseCommit {
- return ResponseCommit{}
-}
-
-func (BaseApplication) Query(req RequestQuery) ResponseQuery {
- return ResponseQuery{Code: CodeTypeOK}
-}
-
-func (BaseApplication) InitChain(req RequestInitChain) ResponseInitChain {
- return ResponseInitChain{}
-}
-
-func (BaseApplication) BeginBlock(req RequestBeginBlock) ResponseBeginBlock {
- return ResponseBeginBlock{}
-}
-
-func (BaseApplication) EndBlock(req RequestEndBlock) ResponseEndBlock {
- return ResponseEndBlock{}
-}
-
-func (BaseApplication) ListSnapshots(req RequestListSnapshots) ResponseListSnapshots {
- return ResponseListSnapshots{}
-}
-
-func (BaseApplication) OfferSnapshot(req RequestOfferSnapshot) ResponseOfferSnapshot {
- return ResponseOfferSnapshot{}
-}
-
-func (BaseApplication) LoadSnapshotChunk(req RequestLoadSnapshotChunk) ResponseLoadSnapshotChunk {
- return ResponseLoadSnapshotChunk{}
-}
-
-func (BaseApplication) ApplySnapshotChunk(req RequestApplySnapshotChunk) ResponseApplySnapshotChunk {
- return ResponseApplySnapshotChunk{}
-}
-
-//-------------------------------------------------------
-
-// GRPCApplication is a GRPC wrapper for Application
-type GRPCApplication struct {
- app Application
-}
-
-func NewGRPCApplication(app Application) *GRPCApplication {
- return &GRPCApplication{app}
-}
-
-func (app *GRPCApplication) Echo(ctx context.Context, req *RequestEcho) (*ResponseEcho, error) {
- return &ResponseEcho{Message: req.Message}, nil
+func (BaseApplication) Info(_ context.Context, req *RequestInfo) (*ResponseInfo, error) {
+ return &ResponseInfo{}, nil
}
-func (app *GRPCApplication) Flush(ctx context.Context, req *RequestFlush) (*ResponseFlush, error) {
- return &ResponseFlush{}, nil
+func (BaseApplication) CheckTx(_ context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
+ return &ResponseCheckTx{Code: CodeTypeOK}, nil
}
-func (app *GRPCApplication) Info(ctx context.Context, req *RequestInfo) (*ResponseInfo, error) {
- res := app.app.Info(*req)
- return &res, nil
+func (BaseApplication) Commit(_ context.Context) (*ResponseCommit, error) {
+ return &ResponseCommit{}, nil
}
-func (app *GRPCApplication) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
- res := app.app.DeliverTx(*req)
- return &res, nil
+func (BaseApplication) ExtendVote(_ context.Context, req *RequestExtendVote) (*ResponseExtendVote, error) {
+ return &ResponseExtendVote{}, nil
}
-func (app *GRPCApplication) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
- res := app.app.CheckTx(*req)
- return &res, nil
+func (BaseApplication) VerifyVoteExtension(_ context.Context, req *RequestVerifyVoteExtension) (*ResponseVerifyVoteExtension, error) {
+ return &ResponseVerifyVoteExtension{
+ Status: ResponseVerifyVoteExtension_ACCEPT,
+ }, nil
}
-func (app *GRPCApplication) Query(ctx context.Context, req *RequestQuery) (*ResponseQuery, error) {
- res := app.app.Query(*req)
- return &res, nil
+func (BaseApplication) Query(_ context.Context, req *RequestQuery) (*ResponseQuery, error) {
+ return &ResponseQuery{Code: CodeTypeOK}, nil
}
-func (app *GRPCApplication) Commit(ctx context.Context, req *RequestCommit) (*ResponseCommit, error) {
- res := app.app.Commit()
- return &res, nil
+func (BaseApplication) InitChain(_ context.Context, req *RequestInitChain) (*ResponseInitChain, error) {
+ return &ResponseInitChain{}, nil
}
-func (app *GRPCApplication) InitChain(ctx context.Context, req *RequestInitChain) (*ResponseInitChain, error) {
- res := app.app.InitChain(*req)
- return &res, nil
+func (BaseApplication) ListSnapshots(_ context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
+ return &ResponseListSnapshots{}, nil
}
-func (app *GRPCApplication) BeginBlock(ctx context.Context, req *RequestBeginBlock) (*ResponseBeginBlock, error) {
- res := app.app.BeginBlock(*req)
- return &res, nil
+func (BaseApplication) OfferSnapshot(_ context.Context, req *RequestOfferSnapshot) (*ResponseOfferSnapshot, error) {
+ return &ResponseOfferSnapshot{}, nil
}
-func (app *GRPCApplication) EndBlock(ctx context.Context, req *RequestEndBlock) (*ResponseEndBlock, error) {
- res := app.app.EndBlock(*req)
- return &res, nil
+func (BaseApplication) LoadSnapshotChunk(_ context.Context, _ *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error) {
+ return &ResponseLoadSnapshotChunk{}, nil
}
-func (app *GRPCApplication) ListSnapshots(
- ctx context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
- res := app.app.ListSnapshots(*req)
- return &res, nil
+func (BaseApplication) ApplySnapshotChunk(_ context.Context, req *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error) {
+ return &ResponseApplySnapshotChunk{}, nil
}
-func (app *GRPCApplication) OfferSnapshot(
- ctx context.Context, req *RequestOfferSnapshot) (*ResponseOfferSnapshot, error) {
- res := app.app.OfferSnapshot(*req)
- return &res, nil
+func (BaseApplication) PrepareProposal(_ context.Context, req *RequestPrepareProposal) (*ResponsePrepareProposal, error) {
+ trs := make([]*TxRecord, 0, len(req.Txs))
+ var totalBytes int64
+ for _, tx := range req.Txs {
+ totalBytes += int64(len(tx))
+ if totalBytes > req.MaxTxBytes {
+ break
+ }
+ trs = append(trs, &TxRecord{
+ Action: TxRecord_UNMODIFIED,
+ Tx: tx,
+ })
+ }
+ return &ResponsePrepareProposal{TxRecords: trs}, nil
}
-func (app *GRPCApplication) LoadSnapshotChunk(
- ctx context.Context, req *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error) {
- res := app.app.LoadSnapshotChunk(*req)
- return &res, nil
+func (BaseApplication) ProcessProposal(_ context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
+ return &ResponseProcessProposal{Status: ResponseProcessProposal_ACCEPT}, nil
}
-func (app *GRPCApplication) ApplySnapshotChunk(
- ctx context.Context, req *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error) {
- res := app.app.ApplySnapshotChunk(*req)
- return &res, nil
+func (BaseApplication) FinalizeBlock(_ context.Context, req *RequestFinalizeBlock) (*ResponseFinalizeBlock, error) {
+ txs := make([]*ExecTxResult, len(req.Txs))
+ for i := range req.Txs {
+ txs[i] = &ExecTxResult{Code: CodeTypeOK}
+ }
+ return &ResponseFinalizeBlock{
+ TxResults: txs,
+ }, nil
}
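The default `PrepareProposal` above fills the proposal in order until the byte budget is exhausted. A minimal, self-contained sketch of that selection loop (with a plain string `Action` standing in for the `TxRecord_TxAction` enum, purely for illustration):

```go
package main

import "fmt"

// TxRecord is a hypothetical stand-in for the generated protobuf type;
// Action mirrors TxRecord_UNMODIFIED as a plain string.
type TxRecord struct {
	Action string
	Tx     []byte
}

// capTxs keeps transactions in their original order until adding the
// next one would exceed maxTxBytes, marking each kept tx UNMODIFIED,
// as the BaseApplication default does.
func capTxs(txs [][]byte, maxTxBytes int64) []*TxRecord {
	trs := make([]*TxRecord, 0, len(txs))
	var totalBytes int64
	for _, tx := range txs {
		totalBytes += int64(len(tx))
		if totalBytes > maxTxBytes {
			break
		}
		trs = append(trs, &TxRecord{Action: "UNMODIFIED", Tx: tx})
	}
	return trs
}

func main() {
	txs := [][]byte{[]byte("aaaa"), []byte("bbbb"), []byte("cccc")}
	// Budget of 9 bytes: 4 + 4 fit; the third tx would push the total to 12.
	fmt.Println(len(capTxs(txs, 9))) // 2
}
```

Note the loop stops at the first transaction that overflows the budget rather than skipping it and trying later, smaller transactions; an application overriding `PrepareProposal` can implement any other policy.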
diff --git a/abci/types/client.go b/abci/types/client.go
deleted file mode 100644
index ab1254f4c2..0000000000
--- a/abci/types/client.go
+++ /dev/null
@@ -1 +0,0 @@
-package types
diff --git a/abci/types/messages.go b/abci/types/messages.go
index 74f3cc75c8..80ab195259 100644
--- a/abci/types/messages.go
+++ b/abci/types/messages.go
@@ -4,6 +4,7 @@ import (
"io"
"github.com/gogo/protobuf/proto"
+
"github.com/tendermint/tendermint/internal/libs/protoio"
)
@@ -38,75 +39,87 @@ func ToRequestFlush() *Request {
}
}
-func ToRequestInfo(req RequestInfo) *Request {
+func ToRequestInfo(req *RequestInfo) *Request {
return &Request{
- Value: &Request_Info{&req},
+ Value: &Request_Info{req},
}
}
-func ToRequestDeliverTx(req RequestDeliverTx) *Request {
+func ToRequestCheckTx(req *RequestCheckTx) *Request {
return &Request{
- Value: &Request_DeliverTx{&req},
+ Value: &Request_CheckTx{req},
}
}
-func ToRequestCheckTx(req RequestCheckTx) *Request {
+func ToRequestCommit() *Request {
return &Request{
- Value: &Request_CheckTx{&req},
+ Value: &Request_Commit{&RequestCommit{}},
}
}
-func ToRequestCommit() *Request {
+func ToRequestQuery(req *RequestQuery) *Request {
return &Request{
- Value: &Request_Commit{&RequestCommit{}},
+ Value: &Request_Query{req},
+ }
+}
+
+func ToRequestInitChain(req *RequestInitChain) *Request {
+ return &Request{
+ Value: &Request_InitChain{req},
+ }
+}
+
+func ToRequestListSnapshots(req *RequestListSnapshots) *Request {
+ return &Request{
+ Value: &Request_ListSnapshots{req},
}
}
-func ToRequestQuery(req RequestQuery) *Request {
+func ToRequestOfferSnapshot(req *RequestOfferSnapshot) *Request {
return &Request{
- Value: &Request_Query{&req},
+ Value: &Request_OfferSnapshot{req},
}
}
-func ToRequestInitChain(req RequestInitChain) *Request {
+func ToRequestLoadSnapshotChunk(req *RequestLoadSnapshotChunk) *Request {
return &Request{
- Value: &Request_InitChain{&req},
+ Value: &Request_LoadSnapshotChunk{req},
}
}
-func ToRequestBeginBlock(req RequestBeginBlock) *Request {
+func ToRequestApplySnapshotChunk(req *RequestApplySnapshotChunk) *Request {
return &Request{
- Value: &Request_BeginBlock{&req},
+ Value: &Request_ApplySnapshotChunk{req},
}
}
-func ToRequestEndBlock(req RequestEndBlock) *Request {
+func ToRequestExtendVote(req *RequestExtendVote) *Request {
return &Request{
- Value: &Request_EndBlock{&req},
+ Value: &Request_ExtendVote{req},
}
}
-func ToRequestListSnapshots(req RequestListSnapshots) *Request {
+func ToRequestVerifyVoteExtension(req *RequestVerifyVoteExtension) *Request {
return &Request{
- Value: &Request_ListSnapshots{&req},
+ Value: &Request_VerifyVoteExtension{req},
}
}
-func ToRequestOfferSnapshot(req RequestOfferSnapshot) *Request {
+func ToRequestPrepareProposal(req *RequestPrepareProposal) *Request {
return &Request{
- Value: &Request_OfferSnapshot{&req},
+ Value: &Request_PrepareProposal{req},
}
}
-func ToRequestLoadSnapshotChunk(req RequestLoadSnapshotChunk) *Request {
+func ToRequestProcessProposal(req *RequestProcessProposal) *Request {
return &Request{
- Value: &Request_LoadSnapshotChunk{&req},
+ Value: &Request_ProcessProposal{req},
}
}
-func ToRequestApplySnapshotChunk(req RequestApplySnapshotChunk) *Request {
+func ToRequestFinalizeBlock(req *RequestFinalizeBlock) *Request {
return &Request{
- Value: &Request_ApplySnapshotChunk{&req},
+ Value: &Request_FinalizeBlock{req},
}
}
@@ -130,73 +143,86 @@ func ToResponseFlush() *Response {
}
}
-func ToResponseInfo(res ResponseInfo) *Response {
+func ToResponseInfo(res *ResponseInfo) *Response {
+ return &Response{
+ Value: &Response_Info{res},
+ }
+}
+
+func ToResponseCheckTx(res *ResponseCheckTx) *Response {
return &Response{
- Value: &Response_Info{&res},
+ Value: &Response_CheckTx{res},
}
}
-func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
+
+func ToResponseCommit(res *ResponseCommit) *Response {
+ return &Response{
+ Value: &Response_Commit{res},
+ }
+}
+
+func ToResponseQuery(res *ResponseQuery) *Response {
return &Response{
- Value: &Response_DeliverTx{&res},
+ Value: &Response_Query{res},
}
}
-func ToResponseCheckTx(res ResponseCheckTx) *Response {
+func ToResponseInitChain(res *ResponseInitChain) *Response {
return &Response{
- Value: &Response_CheckTx{&res},
+ Value: &Response_InitChain{res},
}
}
-func ToResponseCommit(res ResponseCommit) *Response {
+func ToResponseListSnapshots(res *ResponseListSnapshots) *Response {
return &Response{
- Value: &Response_Commit{&res},
+ Value: &Response_ListSnapshots{res},
}
}
-func ToResponseQuery(res ResponseQuery) *Response {
+func ToResponseOfferSnapshot(res *ResponseOfferSnapshot) *Response {
return &Response{
- Value: &Response_Query{&res},
+ Value: &Response_OfferSnapshot{res},
}
}
-func ToResponseInitChain(res ResponseInitChain) *Response {
+func ToResponseLoadSnapshotChunk(res *ResponseLoadSnapshotChunk) *Response {
return &Response{
- Value: &Response_InitChain{&res},
+ Value: &Response_LoadSnapshotChunk{res},
}
}
-func ToResponseBeginBlock(res ResponseBeginBlock) *Response {
+func ToResponseApplySnapshotChunk(res *ResponseApplySnapshotChunk) *Response {
return &Response{
- Value: &Response_BeginBlock{&res},
+ Value: &Response_ApplySnapshotChunk{res},
}
}
-func ToResponseEndBlock(res ResponseEndBlock) *Response {
+func ToResponseExtendVote(res *ResponseExtendVote) *Response {
return &Response{
- Value: &Response_EndBlock{&res},
+ Value: &Response_ExtendVote{res},
}
}
-func ToResponseListSnapshots(res ResponseListSnapshots) *Response {
+func ToResponseVerifyVoteExtension(res *ResponseVerifyVoteExtension) *Response {
return &Response{
- Value: &Response_ListSnapshots{&res},
+ Value: &Response_VerifyVoteExtension{res},
}
}
-func ToResponseOfferSnapshot(res ResponseOfferSnapshot) *Response {
+func ToResponsePrepareProposal(res *ResponsePrepareProposal) *Response {
return &Response{
- Value: &Response_OfferSnapshot{&res},
+ Value: &Response_PrepareProposal{res},
}
}
-func ToResponseLoadSnapshotChunk(res ResponseLoadSnapshotChunk) *Response {
+func ToResponseProcessProposal(res *ResponseProcessProposal) *Response {
return &Response{
- Value: &Response_LoadSnapshotChunk{&res},
+ Value: &Response_ProcessProposal{res},
}
}
-func ToResponseApplySnapshotChunk(res ResponseApplySnapshotChunk) *Response {
+func ToResponseFinalizeBlock(res *ResponseFinalizeBlock) *Response {
return &Response{
- Value: &Response_ApplySnapshotChunk{&res},
+ Value: &Response_FinalizeBlock{res},
}
}
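The `ToRequest*`/`ToResponse*` helpers all follow the same shape: wrap a message pointer in its protobuf `oneof` wrapper struct. Because the new signatures take pointers, the wrapper stores the argument directly instead of taking the address of a by-value copy as before. A testify- and protobuf-free sketch of the pattern (all type names here are hypothetical mirrors of the generated code):

```go
package main

import "fmt"

// isRequestValue mirrors the generated oneof marker interface.
type isRequestValue interface{ isRequestValue() }

type RequestEcho struct{ Message string }
type RequestCheckTx struct{ Tx []byte }

// One wrapper struct per oneof case.
type RequestEchoValue struct{ Echo *RequestEcho }
type RequestCheckTxValue struct{ CheckTx *RequestCheckTx }

func (*RequestEchoValue) isRequestValue()    {}
func (*RequestCheckTxValue) isRequestValue() {}

// Request holds exactly one of the wrapper values.
type Request struct{ Value isRequestValue }

// ToRequestCheckTx mirrors the helper above: the argument is already
// a pointer, so it is stored directly with no copy.
func ToRequestCheckTx(req *RequestCheckTx) *Request {
	return &Request{Value: &RequestCheckTxValue{CheckTx: req}}
}

func main() {
	req := ToRequestCheckTx(&RequestCheckTx{Tx: []byte("tx1")})
	// Consumers dispatch on the concrete wrapper type.
	switch v := req.Value.(type) {
	case *RequestCheckTxValue:
		fmt.Println(string(v.CheckTx.Tx)) // tx1
	case *RequestEchoValue:
		fmt.Println(v.Echo.Message)
	}
}
```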
diff --git a/abci/types/messages_test.go b/abci/types/messages_test.go
index 491d10c7f8..4f17f9f83c 100644
--- a/abci/types/messages_test.go
+++ b/abci/types/messages_test.go
@@ -13,8 +13,8 @@ import (
)
func TestMarshalJSON(t *testing.T) {
- b, err := json.Marshal(&ResponseDeliverTx{})
- assert.Nil(t, err)
+ b, err := json.Marshal(&ExecTxResult{Code: 1})
+ assert.NoError(t, err)
// include empty fields.
assert.True(t, strings.Contains(string(b), "code"))
r1 := ResponseCheckTx{
@@ -31,11 +31,11 @@ func TestMarshalJSON(t *testing.T) {
},
}
b, err = json.Marshal(&r1)
- assert.Nil(t, err)
+ assert.NoError(t, err)
var r2 ResponseCheckTx
err = json.Unmarshal(b, &r2)
- assert.Nil(t, err)
+ assert.NoError(t, err)
assert.Equal(t, r1, r2)
}
@@ -49,11 +49,11 @@ func TestWriteReadMessageSimple(t *testing.T) {
for _, c := range cases {
buf := new(bytes.Buffer)
err := WriteMessage(c, buf)
- assert.Nil(t, err)
+ assert.NoError(t, err)
msg := new(RequestEcho)
err = ReadMessage(buf, msg)
- assert.Nil(t, err)
+ assert.NoError(t, err)
assert.True(t, proto.Equal(c, msg))
}
@@ -71,11 +71,11 @@ func TestWriteReadMessage(t *testing.T) {
for _, c := range cases {
buf := new(bytes.Buffer)
err := WriteMessage(c, buf)
- assert.Nil(t, err)
+ assert.NoError(t, err)
msg := new(tmproto.Header)
err = ReadMessage(buf, msg)
- assert.Nil(t, err)
+ assert.NoError(t, err)
assert.True(t, proto.Equal(c, msg))
}
@@ -103,11 +103,11 @@ func TestWriteReadMessage2(t *testing.T) {
for _, c := range cases {
buf := new(bytes.Buffer)
err := WriteMessage(c, buf)
- assert.Nil(t, err)
+ assert.NoError(t, err)
msg := new(ResponseCheckTx)
err = ReadMessage(buf, msg)
- assert.Nil(t, err)
+ assert.NoError(t, err)
assert.True(t, proto.Equal(c, msg))
}
diff --git a/abci/types/mocks/application.go b/abci/types/mocks/application.go
new file mode 100644
index 0000000000..2d35c481f0
--- /dev/null
+++ b/abci/types/mocks/application.go
@@ -0,0 +1,349 @@
+// Code generated by mockery. DO NOT EDIT.
+
+package mocks
+
+import (
+ context "context"
+ testing "testing"
+
+ mock "github.com/stretchr/testify/mock"
+
+ types "github.com/tendermint/tendermint/abci/types"
+)
+
+// Application is an autogenerated mock type for the Application type
+type Application struct {
+ mock.Mock
+}
+
+// ApplySnapshotChunk provides a mock function with given fields: _a0, _a1
+func (_m *Application) ApplySnapshotChunk(_a0 context.Context, _a1 *types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseApplySnapshotChunk
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestApplySnapshotChunk) *types.ResponseApplySnapshotChunk); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseApplySnapshotChunk)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestApplySnapshotChunk) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// CheckTx provides a mock function with given fields: _a0, _a1
+func (_m *Application) CheckTx(_a0 context.Context, _a1 *types.RequestCheckTx) (*types.ResponseCheckTx, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseCheckTx
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestCheckTx) *types.ResponseCheckTx); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseCheckTx)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestCheckTx) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// Commit provides a mock function with given fields: _a0
+func (_m *Application) Commit(_a0 context.Context) (*types.ResponseCommit, error) {
+ ret := _m.Called(_a0)
+
+ var r0 *types.ResponseCommit
+ if rf, ok := ret.Get(0).(func(context.Context) *types.ResponseCommit); ok {
+ r0 = rf(_a0)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseCommit)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context) error); ok {
+ r1 = rf(_a0)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// ExtendVote provides a mock function with given fields: _a0, _a1
+func (_m *Application) ExtendVote(_a0 context.Context, _a1 *types.RequestExtendVote) (*types.ResponseExtendVote, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseExtendVote
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestExtendVote) *types.ResponseExtendVote); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseExtendVote)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestExtendVote) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// FinalizeBlock provides a mock function with given fields: _a0, _a1
+func (_m *Application) FinalizeBlock(_a0 context.Context, _a1 *types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseFinalizeBlock
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestFinalizeBlock) *types.ResponseFinalizeBlock); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseFinalizeBlock)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestFinalizeBlock) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// Info provides a mock function with given fields: _a0, _a1
+func (_m *Application) Info(_a0 context.Context, _a1 *types.RequestInfo) (*types.ResponseInfo, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseInfo
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestInfo) *types.ResponseInfo); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseInfo)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestInfo) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// InitChain provides a mock function with given fields: _a0, _a1
+func (_m *Application) InitChain(_a0 context.Context, _a1 *types.RequestInitChain) (*types.ResponseInitChain, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseInitChain
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestInitChain) *types.ResponseInitChain); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseInitChain)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestInitChain) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// ListSnapshots provides a mock function with given fields: _a0, _a1
+func (_m *Application) ListSnapshots(_a0 context.Context, _a1 *types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseListSnapshots
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestListSnapshots) *types.ResponseListSnapshots); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseListSnapshots)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestListSnapshots) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// LoadSnapshotChunk provides a mock function with given fields: _a0, _a1
+func (_m *Application) LoadSnapshotChunk(_a0 context.Context, _a1 *types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseLoadSnapshotChunk
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestLoadSnapshotChunk) *types.ResponseLoadSnapshotChunk); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseLoadSnapshotChunk)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestLoadSnapshotChunk) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// OfferSnapshot provides a mock function with given fields: _a0, _a1
+func (_m *Application) OfferSnapshot(_a0 context.Context, _a1 *types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseOfferSnapshot
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestOfferSnapshot) *types.ResponseOfferSnapshot); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseOfferSnapshot)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestOfferSnapshot) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// PrepareProposal provides a mock function with given fields: _a0, _a1
+func (_m *Application) PrepareProposal(_a0 context.Context, _a1 *types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponsePrepareProposal
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestPrepareProposal) *types.ResponsePrepareProposal); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponsePrepareProposal)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestPrepareProposal) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// ProcessProposal provides a mock function with given fields: _a0, _a1
+func (_m *Application) ProcessProposal(_a0 context.Context, _a1 *types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseProcessProposal
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestProcessProposal) *types.ResponseProcessProposal); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseProcessProposal)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestProcessProposal) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// Query provides a mock function with given fields: _a0, _a1
+func (_m *Application) Query(_a0 context.Context, _a1 *types.RequestQuery) (*types.ResponseQuery, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseQuery
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestQuery) *types.ResponseQuery); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseQuery)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestQuery) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// VerifyVoteExtension provides a mock function with given fields: _a0, _a1
+func (_m *Application) VerifyVoteExtension(_a0 context.Context, _a1 *types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
+ ret := _m.Called(_a0, _a1)
+
+ var r0 *types.ResponseVerifyVoteExtension
+ if rf, ok := ret.Get(0).(func(context.Context, *types.RequestVerifyVoteExtension) *types.ResponseVerifyVoteExtension); ok {
+ r0 = rf(_a0, _a1)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).(*types.ResponseVerifyVoteExtension)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(context.Context, *types.RequestVerifyVoteExtension) error); ok {
+ r1 = rf(_a0, _a1)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
+// NewApplication creates a new instance of Application. It also registers the testing.TB interface on the mock and a cleanup function to assert the mock's expectations.
+func NewApplication(t testing.TB) *Application {
+ mock := &Application{}
+ mock.Mock.Test(t)
+
+ t.Cleanup(func() { mock.AssertExpectations(t) })
+
+ return mock
+}
diff --git a/abci/types/result.go b/abci/types/types.go
similarity index 59%
rename from abci/types/result.go
rename to abci/types/types.go
index dba6bfd159..d13947d1a9 100644
--- a/abci/types/result.go
+++ b/abci/types/types.go
@@ -31,6 +31,16 @@ func (r ResponseDeliverTx) IsErr() bool {
return r.Code != CodeTypeOK
}
+// IsOK returns true if Code is OK.
+func (r ExecTxResult) IsOK() bool {
+ return r.Code == CodeTypeOK
+}
+
+// IsErr returns true if Code is something other than OK.
+func (r ExecTxResult) IsErr() bool {
+ return r.Code != CodeTypeOK
+}
+
// IsOK returns true if Code is OK.
func (r ResponseQuery) IsOK() bool {
return r.Code == CodeTypeOK
@@ -41,6 +51,29 @@ func (r ResponseQuery) IsErr() bool {
return r.Code != CodeTypeOK
}
+func (r ResponseProcessProposal) IsAccepted() bool {
+ return r.Status == ResponseProcessProposal_ACCEPT
+}
+
+func (r ResponseProcessProposal) IsStatusUnknown() bool {
+ return r.Status == ResponseProcessProposal_UNKNOWN
+}
+
+// IsStatusUnknown returns true if Status is UNKNOWN
+func (r ResponseVerifyVoteExtension) IsStatusUnknown() bool {
+ return r.Status == ResponseVerifyVoteExtension_UNKNOWN
+}
+
+// IsOK returns true if Status is ACCEPT
+func (r ResponseVerifyVoteExtension) IsOK() bool {
+ return r.Status == ResponseVerifyVoteExtension_ACCEPT
+}
+
+// IsErr returns true if Status is something other than ACCEPT.
+func (r ResponseVerifyVoteExtension) IsErr() bool {
+ return r.Status != ResponseVerifyVoteExtension_ACCEPT
+}
+
//---------------------------------------------------------------------------
// override JSON marshaling so we emit defaults (ie. disable omitempty)
@@ -118,3 +151,44 @@ var _ jsonRoundTripper = (*ResponseDeliverTx)(nil)
var _ jsonRoundTripper = (*ResponseCheckTx)(nil)
var _ jsonRoundTripper = (*EventAttribute)(nil)
+
+// -----------------------------------------------
+// construct Result data
+
+func RespondVerifyVoteExtension(ok bool) ResponseVerifyVoteExtension {
+ status := ResponseVerifyVoteExtension_REJECT
+ if ok {
+ status = ResponseVerifyVoteExtension_ACCEPT
+ }
+ return ResponseVerifyVoteExtension{
+ Status: status,
+ }
+}
+
+// deterministicExecTxResult constructs a copy of response that omits
+// non-deterministic fields. The input response is not modified.
+func deterministicExecTxResult(response *ExecTxResult) *ExecTxResult {
+ return &ExecTxResult{
+ Code: response.Code,
+ Data: response.Data,
+ GasWanted: response.GasWanted,
+ GasUsed: response.GasUsed,
+ }
+}
+
+// MarshalTxResults encodes the TxResults as a list of byte
+// slices. It strips off the non-deterministic pieces of the TxResults
+// so that the resulting data can be used for hash comparisons and
+// in Merkle proofs.
+func MarshalTxResults(r []*ExecTxResult) ([][]byte, error) {
+ s := make([][]byte, len(r))
+ for i, e := range r {
+ d := deterministicExecTxResult(e)
+ b, err := d.Marshal()
+ if err != nil {
+ return nil, err
+ }
+ s[i] = b
+ }
+ return s, nil
+}
diff --git a/abci/types/types.pb.go b/abci/types/types.pb.go
index 0a290664bc..0d6fc9cd68 100644
--- a/abci/types/types.pb.go
+++ b/abci/types/types.pb.go
@@ -58,31 +58,31 @@ func (CheckTxType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_252557cfdd89a31a, []int{0}
}
-type EvidenceType int32
+type MisbehaviorType int32
const (
- EvidenceType_UNKNOWN EvidenceType = 0
- EvidenceType_DUPLICATE_VOTE EvidenceType = 1
- EvidenceType_LIGHT_CLIENT_ATTACK EvidenceType = 2
+ MisbehaviorType_UNKNOWN MisbehaviorType = 0
+ MisbehaviorType_DUPLICATE_VOTE MisbehaviorType = 1
+ MisbehaviorType_LIGHT_CLIENT_ATTACK MisbehaviorType = 2
)
-var EvidenceType_name = map[int32]string{
+var MisbehaviorType_name = map[int32]string{
0: "UNKNOWN",
1: "DUPLICATE_VOTE",
2: "LIGHT_CLIENT_ATTACK",
}
-var EvidenceType_value = map[string]int32{
+var MisbehaviorType_value = map[string]int32{
"UNKNOWN": 0,
"DUPLICATE_VOTE": 1,
"LIGHT_CLIENT_ATTACK": 2,
}
-func (x EvidenceType) String() string {
- return proto.EnumName(EvidenceType_name, int32(x))
+func (x MisbehaviorType) String() string {
+ return proto.EnumName(MisbehaviorType_name, int32(x))
}
-func (EvidenceType) EnumDescriptor() ([]byte, []int) {
+func (MisbehaviorType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_252557cfdd89a31a, []int{1}
}
@@ -120,7 +120,7 @@ func (x ResponseOfferSnapshot_Result) String() string {
}
func (ResponseOfferSnapshot_Result) EnumDescriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{28, 0}
+ return fileDescriptor_252557cfdd89a31a, []int{33, 0}
}
type ResponseApplySnapshotChunk_Result int32
@@ -157,7 +157,95 @@ func (x ResponseApplySnapshotChunk_Result) String() string {
}
func (ResponseApplySnapshotChunk_Result) EnumDescriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{30, 0}
+ return fileDescriptor_252557cfdd89a31a, []int{35, 0}
+}
+
+type ResponseProcessProposal_ProposalStatus int32
+
+const (
+ ResponseProcessProposal_UNKNOWN ResponseProcessProposal_ProposalStatus = 0
+ ResponseProcessProposal_ACCEPT ResponseProcessProposal_ProposalStatus = 1
+ ResponseProcessProposal_REJECT ResponseProcessProposal_ProposalStatus = 2
+)
+
+var ResponseProcessProposal_ProposalStatus_name = map[int32]string{
+ 0: "UNKNOWN",
+ 1: "ACCEPT",
+ 2: "REJECT",
+}
+
+var ResponseProcessProposal_ProposalStatus_value = map[string]int32{
+ "UNKNOWN": 0,
+ "ACCEPT": 1,
+ "REJECT": 2,
+}
+
+func (x ResponseProcessProposal_ProposalStatus) String() string {
+ return proto.EnumName(ResponseProcessProposal_ProposalStatus_name, int32(x))
+}
+
+func (ResponseProcessProposal_ProposalStatus) EnumDescriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{37, 0}
+}
+
+type ResponseVerifyVoteExtension_VerifyStatus int32
+
+const (
+ ResponseVerifyVoteExtension_UNKNOWN ResponseVerifyVoteExtension_VerifyStatus = 0
+ ResponseVerifyVoteExtension_ACCEPT ResponseVerifyVoteExtension_VerifyStatus = 1
+ ResponseVerifyVoteExtension_REJECT ResponseVerifyVoteExtension_VerifyStatus = 2
+)
+
+var ResponseVerifyVoteExtension_VerifyStatus_name = map[int32]string{
+ 0: "UNKNOWN",
+ 1: "ACCEPT",
+ 2: "REJECT",
+}
+
+var ResponseVerifyVoteExtension_VerifyStatus_value = map[string]int32{
+ "UNKNOWN": 0,
+ "ACCEPT": 1,
+ "REJECT": 2,
+}
+
+func (x ResponseVerifyVoteExtension_VerifyStatus) String() string {
+ return proto.EnumName(ResponseVerifyVoteExtension_VerifyStatus_name, int32(x))
+}
+
+func (ResponseVerifyVoteExtension_VerifyStatus) EnumDescriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{39, 0}
+}
+
+// TxAction contains App-provided information on what to do with a transaction that is part of a raw proposal.
+type TxRecord_TxAction int32
+
+const (
+ TxRecord_UNKNOWN TxRecord_TxAction = 0
+ TxRecord_UNMODIFIED TxRecord_TxAction = 1
+ TxRecord_ADDED TxRecord_TxAction = 2
+ TxRecord_REMOVED TxRecord_TxAction = 3
+)
+
+var TxRecord_TxAction_name = map[int32]string{
+ 0: "UNKNOWN",
+ 1: "UNMODIFIED",
+ 2: "ADDED",
+ 3: "REMOVED",
+}
+
+var TxRecord_TxAction_value = map[string]int32{
+ "UNKNOWN": 0,
+ "UNMODIFIED": 1,
+ "ADDED": 2,
+ "REMOVED": 3,
+}
+
+func (x TxRecord_TxAction) String() string {
+ return proto.EnumName(TxRecord_TxAction_name, int32(x))
+}
+
+func (TxRecord_TxAction) EnumDescriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{47, 0}
}
type Request struct {
@@ -176,6 +264,11 @@ type Request struct {
// *Request_OfferSnapshot
// *Request_LoadSnapshotChunk
// *Request_ApplySnapshotChunk
+ // *Request_PrepareProposal
+ // *Request_ProcessProposal
+ // *Request_ExtendVote
+ // *Request_VerifyVoteExtension
+ // *Request_FinalizeBlock
Value isRequest_Value `protobuf_oneof:"value"`
}
@@ -260,21 +353,41 @@ type Request_LoadSnapshotChunk struct {
type Request_ApplySnapshotChunk struct {
ApplySnapshotChunk *RequestApplySnapshotChunk `protobuf:"bytes,14,opt,name=apply_snapshot_chunk,json=applySnapshotChunk,proto3,oneof" json:"apply_snapshot_chunk,omitempty"`
}
-
-func (*Request_Echo) isRequest_Value() {}
-func (*Request_Flush) isRequest_Value() {}
-func (*Request_Info) isRequest_Value() {}
-func (*Request_InitChain) isRequest_Value() {}
-func (*Request_Query) isRequest_Value() {}
-func (*Request_BeginBlock) isRequest_Value() {}
-func (*Request_CheckTx) isRequest_Value() {}
-func (*Request_DeliverTx) isRequest_Value() {}
-func (*Request_EndBlock) isRequest_Value() {}
-func (*Request_Commit) isRequest_Value() {}
-func (*Request_ListSnapshots) isRequest_Value() {}
-func (*Request_OfferSnapshot) isRequest_Value() {}
-func (*Request_LoadSnapshotChunk) isRequest_Value() {}
-func (*Request_ApplySnapshotChunk) isRequest_Value() {}
+type Request_PrepareProposal struct {
+ PrepareProposal *RequestPrepareProposal `protobuf:"bytes,15,opt,name=prepare_proposal,json=prepareProposal,proto3,oneof" json:"prepare_proposal,omitempty"`
+}
+type Request_ProcessProposal struct {
+ ProcessProposal *RequestProcessProposal `protobuf:"bytes,16,opt,name=process_proposal,json=processProposal,proto3,oneof" json:"process_proposal,omitempty"`
+}
+type Request_ExtendVote struct {
+ ExtendVote *RequestExtendVote `protobuf:"bytes,17,opt,name=extend_vote,json=extendVote,proto3,oneof" json:"extend_vote,omitempty"`
+}
+type Request_VerifyVoteExtension struct {
+ VerifyVoteExtension *RequestVerifyVoteExtension `protobuf:"bytes,18,opt,name=verify_vote_extension,json=verifyVoteExtension,proto3,oneof" json:"verify_vote_extension,omitempty"`
+}
+type Request_FinalizeBlock struct {
+ FinalizeBlock *RequestFinalizeBlock `protobuf:"bytes,19,opt,name=finalize_block,json=finalizeBlock,proto3,oneof" json:"finalize_block,omitempty"`
+}
+
+func (*Request_Echo) isRequest_Value() {}
+func (*Request_Flush) isRequest_Value() {}
+func (*Request_Info) isRequest_Value() {}
+func (*Request_InitChain) isRequest_Value() {}
+func (*Request_Query) isRequest_Value() {}
+func (*Request_BeginBlock) isRequest_Value() {}
+func (*Request_CheckTx) isRequest_Value() {}
+func (*Request_DeliverTx) isRequest_Value() {}
+func (*Request_EndBlock) isRequest_Value() {}
+func (*Request_Commit) isRequest_Value() {}
+func (*Request_ListSnapshots) isRequest_Value() {}
+func (*Request_OfferSnapshot) isRequest_Value() {}
+func (*Request_LoadSnapshotChunk) isRequest_Value() {}
+func (*Request_ApplySnapshotChunk) isRequest_Value() {}
+func (*Request_PrepareProposal) isRequest_Value() {}
+func (*Request_ProcessProposal) isRequest_Value() {}
+func (*Request_ExtendVote) isRequest_Value() {}
+func (*Request_VerifyVoteExtension) isRequest_Value() {}
+func (*Request_FinalizeBlock) isRequest_Value() {}
func (m *Request) GetValue() isRequest_Value {
if m != nil {
@@ -318,6 +431,7 @@ func (m *Request) GetQuery() *RequestQuery {
return nil
}
+// Deprecated: Do not use.
func (m *Request) GetBeginBlock() *RequestBeginBlock {
if x, ok := m.GetValue().(*Request_BeginBlock); ok {
return x.BeginBlock
@@ -332,6 +446,7 @@ func (m *Request) GetCheckTx() *RequestCheckTx {
return nil
}
+// Deprecated: Do not use.
func (m *Request) GetDeliverTx() *RequestDeliverTx {
if x, ok := m.GetValue().(*Request_DeliverTx); ok {
return x.DeliverTx
@@ -339,6 +454,7 @@ func (m *Request) GetDeliverTx() *RequestDeliverTx {
return nil
}
+// Deprecated: Do not use.
func (m *Request) GetEndBlock() *RequestEndBlock {
if x, ok := m.GetValue().(*Request_EndBlock); ok {
return x.EndBlock
@@ -381,6 +497,41 @@ func (m *Request) GetApplySnapshotChunk() *RequestApplySnapshotChunk {
return nil
}
+func (m *Request) GetPrepareProposal() *RequestPrepareProposal {
+ if x, ok := m.GetValue().(*Request_PrepareProposal); ok {
+ return x.PrepareProposal
+ }
+ return nil
+}
+
+func (m *Request) GetProcessProposal() *RequestProcessProposal {
+ if x, ok := m.GetValue().(*Request_ProcessProposal); ok {
+ return x.ProcessProposal
+ }
+ return nil
+}
+
+func (m *Request) GetExtendVote() *RequestExtendVote {
+ if x, ok := m.GetValue().(*Request_ExtendVote); ok {
+ return x.ExtendVote
+ }
+ return nil
+}
+
+func (m *Request) GetVerifyVoteExtension() *RequestVerifyVoteExtension {
+ if x, ok := m.GetValue().(*Request_VerifyVoteExtension); ok {
+ return x.VerifyVoteExtension
+ }
+ return nil
+}
+
+func (m *Request) GetFinalizeBlock() *RequestFinalizeBlock {
+ if x, ok := m.GetValue().(*Request_FinalizeBlock); ok {
+ return x.FinalizeBlock
+ }
+ return nil
+}
+
// XXX_OneofWrappers is for the internal use of the proto package.
func (*Request) XXX_OneofWrappers() []interface{} {
return []interface{}{
@@ -398,6 +549,11 @@ func (*Request) XXX_OneofWrappers() []interface{} {
(*Request_OfferSnapshot)(nil),
(*Request_LoadSnapshotChunk)(nil),
(*Request_ApplySnapshotChunk)(nil),
+ (*Request_PrepareProposal)(nil),
+ (*Request_ProcessProposal)(nil),
+ (*Request_ExtendVote)(nil),
+ (*Request_VerifyVoteExtension)(nil),
+ (*Request_FinalizeBlock)(nil),
}
}
@@ -710,10 +866,10 @@ func (m *RequestQuery) GetProve() bool {
}
type RequestBeginBlock struct {
- Hash []byte `protobuf:"bytes,1,opt,name=hash,proto3" json:"hash,omitempty"`
- Header types1.Header `protobuf:"bytes,2,opt,name=header,proto3" json:"header"`
- LastCommitInfo LastCommitInfo `protobuf:"bytes,3,opt,name=last_commit_info,json=lastCommitInfo,proto3" json:"last_commit_info"`
- ByzantineValidators []Evidence `protobuf:"bytes,4,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
+ Hash []byte `protobuf:"bytes,1,opt,name=hash,proto3" json:"hash,omitempty"`
+ Header types1.Header `protobuf:"bytes,2,opt,name=header,proto3" json:"header"`
+ LastCommitInfo CommitInfo `protobuf:"bytes,3,opt,name=last_commit_info,json=lastCommitInfo,proto3" json:"last_commit_info"`
+ ByzantineValidators []Misbehavior `protobuf:"bytes,4,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
}
func (m *RequestBeginBlock) Reset() { *m = RequestBeginBlock{} }
@@ -763,14 +919,14 @@ func (m *RequestBeginBlock) GetHeader() types1.Header {
return types1.Header{}
}
-func (m *RequestBeginBlock) GetLastCommitInfo() LastCommitInfo {
+func (m *RequestBeginBlock) GetLastCommitInfo() CommitInfo {
if m != nil {
return m.LastCommitInfo
}
- return LastCommitInfo{}
+ return CommitInfo{}
}
-func (m *RequestBeginBlock) GetByzantineValidators() []Evidence {
+func (m *RequestBeginBlock) GetByzantineValidators() []Misbehavior {
if m != nil {
return m.ByzantineValidators
}
@@ -1165,38 +1321,32 @@ func (m *RequestApplySnapshotChunk) GetSender() string {
return ""
}
-type Response struct {
- // Types that are valid to be assigned to Value:
- // *Response_Exception
- // *Response_Echo
- // *Response_Flush
- // *Response_Info
- // *Response_InitChain
- // *Response_Query
- // *Response_BeginBlock
- // *Response_CheckTx
- // *Response_DeliverTx
- // *Response_EndBlock
- // *Response_Commit
- // *Response_ListSnapshots
- // *Response_OfferSnapshot
- // *Response_LoadSnapshotChunk
- // *Response_ApplySnapshotChunk
- Value isResponse_Value `protobuf_oneof:"value"`
-}
-
-func (m *Response) Reset() { *m = Response{} }
-func (m *Response) String() string { return proto.CompactTextString(m) }
-func (*Response) ProtoMessage() {}
-func (*Response) Descriptor() ([]byte, []int) {
+type RequestPrepareProposal struct {
+ // the modified transactions cannot exceed this size.
+ MaxTxBytes int64 `protobuf:"varint,1,opt,name=max_tx_bytes,json=maxTxBytes,proto3" json:"max_tx_bytes,omitempty"`
+ // txs is an array of transactions that will be included in a block,
+ // sent to the app for possible modifications.
+ Txs [][]byte `protobuf:"bytes,2,rep,name=txs,proto3" json:"txs,omitempty"`
+ LocalLastCommit ExtendedCommitInfo `protobuf:"bytes,3,opt,name=local_last_commit,json=localLastCommit,proto3" json:"local_last_commit"`
+ ByzantineValidators []Misbehavior `protobuf:"bytes,4,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
+ Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
+ Time time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
+ NextValidatorsHash []byte `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
+ ProposerProTxHash []byte `protobuf:"bytes,8,opt,name=proposer_pro_tx_hash,json=proposerProTxHash,proto3" json:"proposer_pro_tx_hash,omitempty"`
+}
+
+func (m *RequestPrepareProposal) Reset() { *m = RequestPrepareProposal{} }
+func (m *RequestPrepareProposal) String() string { return proto.CompactTextString(m) }
+func (*RequestPrepareProposal) ProtoMessage() {}
+func (*RequestPrepareProposal) Descriptor() ([]byte, []int) {
return fileDescriptor_252557cfdd89a31a, []int{15}
}
-func (m *Response) XXX_Unmarshal(b []byte) error {
+func (m *RequestPrepareProposal) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *Response) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *RequestPrepareProposal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_Response.Marshal(b, m, deterministic)
+ return xxx_messageInfo_RequestPrepareProposal.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -1206,236 +1356,193 @@ func (m *Response) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
-func (m *Response) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Response.Merge(m, src)
+func (m *RequestPrepareProposal) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RequestPrepareProposal.Merge(m, src)
}
-func (m *Response) XXX_Size() int {
+func (m *RequestPrepareProposal) XXX_Size() int {
return m.Size()
}
-func (m *Response) XXX_DiscardUnknown() {
- xxx_messageInfo_Response.DiscardUnknown(m)
+func (m *RequestPrepareProposal) XXX_DiscardUnknown() {
+ xxx_messageInfo_RequestPrepareProposal.DiscardUnknown(m)
}
-var xxx_messageInfo_Response proto.InternalMessageInfo
-
-type isResponse_Value interface {
- isResponse_Value()
- MarshalTo([]byte) (int, error)
- Size() int
-}
+var xxx_messageInfo_RequestPrepareProposal proto.InternalMessageInfo
-type Response_Exception struct {
- Exception *ResponseException `protobuf:"bytes,1,opt,name=exception,proto3,oneof" json:"exception,omitempty"`
-}
-type Response_Echo struct {
- Echo *ResponseEcho `protobuf:"bytes,2,opt,name=echo,proto3,oneof" json:"echo,omitempty"`
-}
-type Response_Flush struct {
- Flush *ResponseFlush `protobuf:"bytes,3,opt,name=flush,proto3,oneof" json:"flush,omitempty"`
-}
-type Response_Info struct {
- Info *ResponseInfo `protobuf:"bytes,4,opt,name=info,proto3,oneof" json:"info,omitempty"`
-}
-type Response_InitChain struct {
- InitChain *ResponseInitChain `protobuf:"bytes,5,opt,name=init_chain,json=initChain,proto3,oneof" json:"init_chain,omitempty"`
-}
-type Response_Query struct {
- Query *ResponseQuery `protobuf:"bytes,6,opt,name=query,proto3,oneof" json:"query,omitempty"`
-}
-type Response_BeginBlock struct {
- BeginBlock *ResponseBeginBlock `protobuf:"bytes,7,opt,name=begin_block,json=beginBlock,proto3,oneof" json:"begin_block,omitempty"`
-}
-type Response_CheckTx struct {
- CheckTx *ResponseCheckTx `protobuf:"bytes,8,opt,name=check_tx,json=checkTx,proto3,oneof" json:"check_tx,omitempty"`
-}
-type Response_DeliverTx struct {
- DeliverTx *ResponseDeliverTx `protobuf:"bytes,9,opt,name=deliver_tx,json=deliverTx,proto3,oneof" json:"deliver_tx,omitempty"`
-}
-type Response_EndBlock struct {
- EndBlock *ResponseEndBlock `protobuf:"bytes,10,opt,name=end_block,json=endBlock,proto3,oneof" json:"end_block,omitempty"`
-}
-type Response_Commit struct {
- Commit *ResponseCommit `protobuf:"bytes,11,opt,name=commit,proto3,oneof" json:"commit,omitempty"`
-}
-type Response_ListSnapshots struct {
- ListSnapshots *ResponseListSnapshots `protobuf:"bytes,12,opt,name=list_snapshots,json=listSnapshots,proto3,oneof" json:"list_snapshots,omitempty"`
-}
-type Response_OfferSnapshot struct {
- OfferSnapshot *ResponseOfferSnapshot `protobuf:"bytes,13,opt,name=offer_snapshot,json=offerSnapshot,proto3,oneof" json:"offer_snapshot,omitempty"`
-}
-type Response_LoadSnapshotChunk struct {
- LoadSnapshotChunk *ResponseLoadSnapshotChunk `protobuf:"bytes,14,opt,name=load_snapshot_chunk,json=loadSnapshotChunk,proto3,oneof" json:"load_snapshot_chunk,omitempty"`
-}
-type Response_ApplySnapshotChunk struct {
- ApplySnapshotChunk *ResponseApplySnapshotChunk `protobuf:"bytes,15,opt,name=apply_snapshot_chunk,json=applySnapshotChunk,proto3,oneof" json:"apply_snapshot_chunk,omitempty"`
+func (m *RequestPrepareProposal) GetMaxTxBytes() int64 {
+ if m != nil {
+ return m.MaxTxBytes
+ }
+ return 0
}
-func (*Response_Exception) isResponse_Value() {}
-func (*Response_Echo) isResponse_Value() {}
-func (*Response_Flush) isResponse_Value() {}
-func (*Response_Info) isResponse_Value() {}
-func (*Response_InitChain) isResponse_Value() {}
-func (*Response_Query) isResponse_Value() {}
-func (*Response_BeginBlock) isResponse_Value() {}
-func (*Response_CheckTx) isResponse_Value() {}
-func (*Response_DeliverTx) isResponse_Value() {}
-func (*Response_EndBlock) isResponse_Value() {}
-func (*Response_Commit) isResponse_Value() {}
-func (*Response_ListSnapshots) isResponse_Value() {}
-func (*Response_OfferSnapshot) isResponse_Value() {}
-func (*Response_LoadSnapshotChunk) isResponse_Value() {}
-func (*Response_ApplySnapshotChunk) isResponse_Value() {}
-
-func (m *Response) GetValue() isResponse_Value {
+func (m *RequestPrepareProposal) GetTxs() [][]byte {
if m != nil {
- return m.Value
+ return m.Txs
}
return nil
}
-func (m *Response) GetException() *ResponseException {
- if x, ok := m.GetValue().(*Response_Exception); ok {
- return x.Exception
+func (m *RequestPrepareProposal) GetLocalLastCommit() ExtendedCommitInfo {
+ if m != nil {
+ return m.LocalLastCommit
}
- return nil
+ return ExtendedCommitInfo{}
}
-func (m *Response) GetEcho() *ResponseEcho {
- if x, ok := m.GetValue().(*Response_Echo); ok {
- return x.Echo
+func (m *RequestPrepareProposal) GetByzantineValidators() []Misbehavior {
+ if m != nil {
+ return m.ByzantineValidators
}
return nil
}
-func (m *Response) GetFlush() *ResponseFlush {
- if x, ok := m.GetValue().(*Response_Flush); ok {
- return x.Flush
+func (m *RequestPrepareProposal) GetHeight() int64 {
+ if m != nil {
+ return m.Height
}
- return nil
+ return 0
}
-func (m *Response) GetInfo() *ResponseInfo {
- if x, ok := m.GetValue().(*Response_Info); ok {
- return x.Info
+func (m *RequestPrepareProposal) GetTime() time.Time {
+ if m != nil {
+ return m.Time
}
- return nil
+ return time.Time{}
}
-func (m *Response) GetInitChain() *ResponseInitChain {
- if x, ok := m.GetValue().(*Response_InitChain); ok {
- return x.InitChain
+func (m *RequestPrepareProposal) GetNextValidatorsHash() []byte {
+ if m != nil {
+ return m.NextValidatorsHash
}
return nil
}
-func (m *Response) GetQuery() *ResponseQuery {
- if x, ok := m.GetValue().(*Response_Query); ok {
- return x.Query
+func (m *RequestPrepareProposal) GetProposerProTxHash() []byte {
+ if m != nil {
+ return m.ProposerProTxHash
}
return nil
}
-func (m *Response) GetBeginBlock() *ResponseBeginBlock {
- if x, ok := m.GetValue().(*Response_BeginBlock); ok {
- return x.BeginBlock
+type RequestProcessProposal struct {
+ Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
+ ProposedLastCommit CommitInfo `protobuf:"bytes,2,opt,name=proposed_last_commit,json=proposedLastCommit,proto3" json:"proposed_last_commit"`
+ ByzantineValidators []Misbehavior `protobuf:"bytes,3,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
+ // hash is the merkle root hash of the fields of the proposed block.
+ Hash []byte `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
+ Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
+ Time time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
+ NextValidatorsHash []byte `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
+ ProposerProTxHash []byte `protobuf:"bytes,8,opt,name=proposer_pro_tx_hash,json=proposerProTxHash,proto3" json:"proposer_pro_tx_hash,omitempty"`
+}
+
+func (m *RequestProcessProposal) Reset() { *m = RequestProcessProposal{} }
+func (m *RequestProcessProposal) String() string { return proto.CompactTextString(m) }
+func (*RequestProcessProposal) ProtoMessage() {}
+func (*RequestProcessProposal) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{16}
+}
+func (m *RequestProcessProposal) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *RequestProcessProposal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_RequestProcessProposal.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return nil
+}
+func (m *RequestProcessProposal) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RequestProcessProposal.Merge(m, src)
+}
+func (m *RequestProcessProposal) XXX_Size() int {
+ return m.Size()
+}
+func (m *RequestProcessProposal) XXX_DiscardUnknown() {
+ xxx_messageInfo_RequestProcessProposal.DiscardUnknown(m)
}
-func (m *Response) GetCheckTx() *ResponseCheckTx {
- if x, ok := m.GetValue().(*Response_CheckTx); ok {
- return x.CheckTx
+var xxx_messageInfo_RequestProcessProposal proto.InternalMessageInfo
+
+func (m *RequestProcessProposal) GetTxs() [][]byte {
+ if m != nil {
+ return m.Txs
}
return nil
}
-func (m *Response) GetDeliverTx() *ResponseDeliverTx {
- if x, ok := m.GetValue().(*Response_DeliverTx); ok {
- return x.DeliverTx
+func (m *RequestProcessProposal) GetProposedLastCommit() CommitInfo {
+ if m != nil {
+ return m.ProposedLastCommit
}
- return nil
+ return CommitInfo{}
}
-func (m *Response) GetEndBlock() *ResponseEndBlock {
- if x, ok := m.GetValue().(*Response_EndBlock); ok {
- return x.EndBlock
+func (m *RequestProcessProposal) GetByzantineValidators() []Misbehavior {
+ if m != nil {
+ return m.ByzantineValidators
}
return nil
}
-func (m *Response) GetCommit() *ResponseCommit {
- if x, ok := m.GetValue().(*Response_Commit); ok {
- return x.Commit
+func (m *RequestProcessProposal) GetHash() []byte {
+ if m != nil {
+ return m.Hash
}
return nil
}
-func (m *Response) GetListSnapshots() *ResponseListSnapshots {
- if x, ok := m.GetValue().(*Response_ListSnapshots); ok {
- return x.ListSnapshots
+func (m *RequestProcessProposal) GetHeight() int64 {
+ if m != nil {
+ return m.Height
}
- return nil
+ return 0
}
-func (m *Response) GetOfferSnapshot() *ResponseOfferSnapshot {
- if x, ok := m.GetValue().(*Response_OfferSnapshot); ok {
- return x.OfferSnapshot
+func (m *RequestProcessProposal) GetTime() time.Time {
+ if m != nil {
+ return m.Time
}
- return nil
+ return time.Time{}
}
-func (m *Response) GetLoadSnapshotChunk() *ResponseLoadSnapshotChunk {
- if x, ok := m.GetValue().(*Response_LoadSnapshotChunk); ok {
- return x.LoadSnapshotChunk
+func (m *RequestProcessProposal) GetNextValidatorsHash() []byte {
+ if m != nil {
+ return m.NextValidatorsHash
}
return nil
}
-func (m *Response) GetApplySnapshotChunk() *ResponseApplySnapshotChunk {
- if x, ok := m.GetValue().(*Response_ApplySnapshotChunk); ok {
- return x.ApplySnapshotChunk
+func (m *RequestProcessProposal) GetProposerProTxHash() []byte {
+ if m != nil {
+ return m.ProposerProTxHash
}
return nil
}
-// XXX_OneofWrappers is for the internal use of the proto package.
-func (*Response) XXX_OneofWrappers() []interface{} {
- return []interface{}{
- (*Response_Exception)(nil),
- (*Response_Echo)(nil),
- (*Response_Flush)(nil),
- (*Response_Info)(nil),
- (*Response_InitChain)(nil),
- (*Response_Query)(nil),
- (*Response_BeginBlock)(nil),
- (*Response_CheckTx)(nil),
- (*Response_DeliverTx)(nil),
- (*Response_EndBlock)(nil),
- (*Response_Commit)(nil),
- (*Response_ListSnapshots)(nil),
- (*Response_OfferSnapshot)(nil),
- (*Response_LoadSnapshotChunk)(nil),
- (*Response_ApplySnapshotChunk)(nil),
- }
-}
-
-// nondeterministic
-type ResponseException struct {
- Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"`
+// Extends a vote with application-side injection
+type RequestExtendVote struct {
+ Hash []byte `protobuf:"bytes,1,opt,name=hash,proto3" json:"hash,omitempty"`
+ Height int64 `protobuf:"varint,2,opt,name=height,proto3" json:"height,omitempty"`
}
-func (m *ResponseException) Reset() { *m = ResponseException{} }
-func (m *ResponseException) String() string { return proto.CompactTextString(m) }
-func (*ResponseException) ProtoMessage() {}
-func (*ResponseException) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{16}
+func (m *RequestExtendVote) Reset() { *m = RequestExtendVote{} }
+func (m *RequestExtendVote) String() string { return proto.CompactTextString(m) }
+func (*RequestExtendVote) ProtoMessage() {}
+func (*RequestExtendVote) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{17}
}
-func (m *ResponseException) XXX_Unmarshal(b []byte) error {
+func (m *RequestExtendVote) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseException) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *RequestExtendVote) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseException.Marshal(b, m, deterministic)
+ return xxx_messageInfo_RequestExtendVote.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -1445,126 +1552,52 @@ func (m *ResponseException) XXX_Marshal(b []byte, deterministic bool) ([]byte, e
return b[:n], nil
}
}
-func (m *ResponseException) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseException.Merge(m, src)
+func (m *RequestExtendVote) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RequestExtendVote.Merge(m, src)
}
-func (m *ResponseException) XXX_Size() int {
+func (m *RequestExtendVote) XXX_Size() int {
return m.Size()
}
-func (m *ResponseException) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseException.DiscardUnknown(m)
+func (m *RequestExtendVote) XXX_DiscardUnknown() {
+ xxx_messageInfo_RequestExtendVote.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseException proto.InternalMessageInfo
+var xxx_messageInfo_RequestExtendVote proto.InternalMessageInfo
-func (m *ResponseException) GetError() string {
+func (m *RequestExtendVote) GetHash() []byte {
if m != nil {
- return m.Error
- }
- return ""
-}
-
-type ResponseEcho struct {
- Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"`
-}
-
-func (m *ResponseEcho) Reset() { *m = ResponseEcho{} }
-func (m *ResponseEcho) String() string { return proto.CompactTextString(m) }
-func (*ResponseEcho) ProtoMessage() {}
-func (*ResponseEcho) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{17}
-}
-func (m *ResponseEcho) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ResponseEcho) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ResponseEcho.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+ return m.Hash
}
+ return nil
}
-func (m *ResponseEcho) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseEcho.Merge(m, src)
-}
-func (m *ResponseEcho) XXX_Size() int {
- return m.Size()
-}
-func (m *ResponseEcho) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseEcho.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseEcho proto.InternalMessageInfo
-func (m *ResponseEcho) GetMessage() string {
+func (m *RequestExtendVote) GetHeight() int64 {
if m != nil {
- return m.Message
+ return m.Height
}
- return ""
+ return 0
}
-type ResponseFlush struct {
+// Verify the vote extension
+type RequestVerifyVoteExtension struct {
+ Hash []byte `protobuf:"bytes,1,opt,name=hash,proto3" json:"hash,omitempty"`
+ ValidatorProTxHash []byte `protobuf:"bytes,2,opt,name=validator_pro_tx_hash,json=validatorProTxHash,proto3" json:"validator_pro_tx_hash,omitempty"`
+ Height int64 `protobuf:"varint,3,opt,name=height,proto3" json:"height,omitempty"`
+ VoteExtension []byte `protobuf:"bytes,4,opt,name=vote_extension,json=voteExtension,proto3" json:"vote_extension,omitempty"`
}
-func (m *ResponseFlush) Reset() { *m = ResponseFlush{} }
-func (m *ResponseFlush) String() string { return proto.CompactTextString(m) }
-func (*ResponseFlush) ProtoMessage() {}
-func (*ResponseFlush) Descriptor() ([]byte, []int) {
+func (m *RequestVerifyVoteExtension) Reset() { *m = RequestVerifyVoteExtension{} }
+func (m *RequestVerifyVoteExtension) String() string { return proto.CompactTextString(m) }
+func (*RequestVerifyVoteExtension) ProtoMessage() {}
+func (*RequestVerifyVoteExtension) Descriptor() ([]byte, []int) {
return fileDescriptor_252557cfdd89a31a, []int{18}
}
-func (m *ResponseFlush) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ResponseFlush) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ResponseFlush.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *ResponseFlush) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseFlush.Merge(m, src)
-}
-func (m *ResponseFlush) XXX_Size() int {
- return m.Size()
-}
-func (m *ResponseFlush) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseFlush.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseFlush proto.InternalMessageInfo
-
-type ResponseInfo struct {
- Data string `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
- // this is the software version of the application. TODO: remove?
- Version string `protobuf:"bytes,2,opt,name=version,proto3" json:"version,omitempty"`
- AppVersion uint64 `protobuf:"varint,3,opt,name=app_version,json=appVersion,proto3" json:"app_version,omitempty"`
- LastBlockHeight int64 `protobuf:"varint,4,opt,name=last_block_height,json=lastBlockHeight,proto3" json:"last_block_height,omitempty"`
- LastBlockAppHash []byte `protobuf:"bytes,5,opt,name=last_block_app_hash,json=lastBlockAppHash,proto3" json:"last_block_app_hash,omitempty"`
-}
-
-func (m *ResponseInfo) Reset() { *m = ResponseInfo{} }
-func (m *ResponseInfo) String() string { return proto.CompactTextString(m) }
-func (*ResponseInfo) ProtoMessage() {}
-func (*ResponseInfo) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{19}
-}
-func (m *ResponseInfo) XXX_Unmarshal(b []byte) error {
+func (m *RequestVerifyVoteExtension) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *RequestVerifyVoteExtension) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseInfo.Marshal(b, m, deterministic)
+ return xxx_messageInfo_RequestVerifyVoteExtension.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -1574,73 +1607,70 @@ func (m *ResponseInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
return b[:n], nil
}
}
-func (m *ResponseInfo) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseInfo.Merge(m, src)
+func (m *RequestVerifyVoteExtension) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RequestVerifyVoteExtension.Merge(m, src)
}
-func (m *ResponseInfo) XXX_Size() int {
+func (m *RequestVerifyVoteExtension) XXX_Size() int {
return m.Size()
}
-func (m *ResponseInfo) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseInfo.DiscardUnknown(m)
+func (m *RequestVerifyVoteExtension) XXX_DiscardUnknown() {
+ xxx_messageInfo_RequestVerifyVoteExtension.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseInfo proto.InternalMessageInfo
-
-func (m *ResponseInfo) GetData() string {
- if m != nil {
- return m.Data
- }
- return ""
-}
+var xxx_messageInfo_RequestVerifyVoteExtension proto.InternalMessageInfo
-func (m *ResponseInfo) GetVersion() string {
+func (m *RequestVerifyVoteExtension) GetHash() []byte {
if m != nil {
- return m.Version
+ return m.Hash
}
- return ""
+ return nil
}
-func (m *ResponseInfo) GetAppVersion() uint64 {
+func (m *RequestVerifyVoteExtension) GetValidatorProTxHash() []byte {
if m != nil {
- return m.AppVersion
+ return m.ValidatorProTxHash
}
- return 0
+ return nil
}
-func (m *ResponseInfo) GetLastBlockHeight() int64 {
+func (m *RequestVerifyVoteExtension) GetHeight() int64 {
if m != nil {
- return m.LastBlockHeight
+ return m.Height
}
return 0
}
-func (m *ResponseInfo) GetLastBlockAppHash() []byte {
+func (m *RequestVerifyVoteExtension) GetVoteExtension() []byte {
if m != nil {
- return m.LastBlockAppHash
+ return m.VoteExtension
}
return nil
}
-type ResponseInitChain struct {
- ConsensusParams *types1.ConsensusParams `protobuf:"bytes,1,opt,name=consensus_params,json=consensusParams,proto3" json:"consensus_params,omitempty"`
- AppHash []byte `protobuf:"bytes,3,opt,name=app_hash,json=appHash,proto3" json:"app_hash,omitempty"`
- ValidatorSetUpdate ValidatorSetUpdate `protobuf:"bytes,100,opt,name=validator_set_update,json=validatorSetUpdate,proto3" json:"validator_set_update"`
- NextCoreChainLockUpdate *types1.CoreChainLock `protobuf:"bytes,101,opt,name=next_core_chain_lock_update,json=nextCoreChainLockUpdate,proto3" json:"next_core_chain_lock_update,omitempty"`
- InitialCoreHeight uint32 `protobuf:"varint,102,opt,name=initial_core_height,json=initialCoreHeight,proto3" json:"initial_core_height,omitempty"`
+type RequestFinalizeBlock struct {
+ Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
+ DecidedLastCommit CommitInfo `protobuf:"bytes,2,opt,name=decided_last_commit,json=decidedLastCommit,proto3" json:"decided_last_commit"`
+ ByzantineValidators []Misbehavior `protobuf:"bytes,3,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
+ // hash is the merkle root hash of the fields of the proposed block.
+ Hash []byte `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
+ Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
+ Time time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
+ NextValidatorsHash []byte `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
+ ProposerProTxHash []byte `protobuf:"bytes,8,opt,name=proposer_pro_tx_hash,json=proposerProTxHash,proto3" json:"proposer_pro_tx_hash,omitempty"`
}
-func (m *ResponseInitChain) Reset() { *m = ResponseInitChain{} }
-func (m *ResponseInitChain) String() string { return proto.CompactTextString(m) }
-func (*ResponseInitChain) ProtoMessage() {}
-func (*ResponseInitChain) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{20}
+func (m *RequestFinalizeBlock) Reset() { *m = RequestFinalizeBlock{} }
+func (m *RequestFinalizeBlock) String() string { return proto.CompactTextString(m) }
+func (*RequestFinalizeBlock) ProtoMessage() {}
+func (*RequestFinalizeBlock) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{19}
}
-func (m *ResponseInitChain) XXX_Unmarshal(b []byte) error {
+func (m *RequestFinalizeBlock) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseInitChain) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *RequestFinalizeBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseInitChain.Marshal(b, m, deterministic)
+ return xxx_messageInfo_RequestFinalizeBlock.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -1650,78 +1680,111 @@ func (m *ResponseInitChain) XXX_Marshal(b []byte, deterministic bool) ([]byte, e
return b[:n], nil
}
}
-func (m *ResponseInitChain) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseInitChain.Merge(m, src)
+func (m *RequestFinalizeBlock) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RequestFinalizeBlock.Merge(m, src)
}
-func (m *ResponseInitChain) XXX_Size() int {
+func (m *RequestFinalizeBlock) XXX_Size() int {
return m.Size()
}
-func (m *ResponseInitChain) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseInitChain.DiscardUnknown(m)
+func (m *RequestFinalizeBlock) XXX_DiscardUnknown() {
+ xxx_messageInfo_RequestFinalizeBlock.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseInitChain proto.InternalMessageInfo
+var xxx_messageInfo_RequestFinalizeBlock proto.InternalMessageInfo
-func (m *ResponseInitChain) GetConsensusParams() *types1.ConsensusParams {
+func (m *RequestFinalizeBlock) GetTxs() [][]byte {
if m != nil {
- return m.ConsensusParams
+ return m.Txs
}
return nil
}
-func (m *ResponseInitChain) GetAppHash() []byte {
+func (m *RequestFinalizeBlock) GetDecidedLastCommit() CommitInfo {
if m != nil {
- return m.AppHash
+ return m.DecidedLastCommit
}
- return nil
+ return CommitInfo{}
}
-func (m *ResponseInitChain) GetValidatorSetUpdate() ValidatorSetUpdate {
+func (m *RequestFinalizeBlock) GetByzantineValidators() []Misbehavior {
if m != nil {
- return m.ValidatorSetUpdate
+ return m.ByzantineValidators
}
- return ValidatorSetUpdate{}
+ return nil
}
-func (m *ResponseInitChain) GetNextCoreChainLockUpdate() *types1.CoreChainLock {
+func (m *RequestFinalizeBlock) GetHash() []byte {
if m != nil {
- return m.NextCoreChainLockUpdate
+ return m.Hash
}
return nil
}
-func (m *ResponseInitChain) GetInitialCoreHeight() uint32 {
+func (m *RequestFinalizeBlock) GetHeight() int64 {
if m != nil {
- return m.InitialCoreHeight
+ return m.Height
}
return 0
}
-type ResponseQuery struct {
- Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
- // bytes data = 2; // use "value" instead.
- Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
- Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
- Index int64 `protobuf:"varint,5,opt,name=index,proto3" json:"index,omitempty"`
- Key []byte `protobuf:"bytes,6,opt,name=key,proto3" json:"key,omitempty"`
- Value []byte `protobuf:"bytes,7,opt,name=value,proto3" json:"value,omitempty"`
- ProofOps *crypto.ProofOps `protobuf:"bytes,8,opt,name=proof_ops,json=proofOps,proto3" json:"proof_ops,omitempty"`
- Height int64 `protobuf:"varint,9,opt,name=height,proto3" json:"height,omitempty"`
- Codespace string `protobuf:"bytes,10,opt,name=codespace,proto3" json:"codespace,omitempty"`
+func (m *RequestFinalizeBlock) GetTime() time.Time {
+ if m != nil {
+ return m.Time
+ }
+ return time.Time{}
}
-func (m *ResponseQuery) Reset() { *m = ResponseQuery{} }
-func (m *ResponseQuery) String() string { return proto.CompactTextString(m) }
-func (*ResponseQuery) ProtoMessage() {}
-func (*ResponseQuery) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{21}
+func (m *RequestFinalizeBlock) GetNextValidatorsHash() []byte {
+ if m != nil {
+ return m.NextValidatorsHash
+ }
+ return nil
}
-func (m *ResponseQuery) XXX_Unmarshal(b []byte) error {
+
+func (m *RequestFinalizeBlock) GetProposerProTxHash() []byte {
+ if m != nil {
+ return m.ProposerProTxHash
+ }
+ return nil
+}
+
+type Response struct {
+ // Types that are valid to be assigned to Value:
+ // *Response_Exception
+ // *Response_Echo
+ // *Response_Flush
+ // *Response_Info
+ // *Response_InitChain
+ // *Response_Query
+ // *Response_BeginBlock
+ // *Response_CheckTx
+ // *Response_DeliverTx
+ // *Response_EndBlock
+ // *Response_Commit
+ // *Response_ListSnapshots
+ // *Response_OfferSnapshot
+ // *Response_LoadSnapshotChunk
+ // *Response_ApplySnapshotChunk
+ // *Response_PrepareProposal
+ // *Response_ProcessProposal
+ // *Response_ExtendVote
+ // *Response_VerifyVoteExtension
+ // *Response_FinalizeBlock
+ Value isResponse_Value `protobuf_oneof:"value"`
+}
+
+func (m *Response) Reset() { *m = Response{} }
+func (m *Response) String() string { return proto.CompactTextString(m) }
+func (*Response) ProtoMessage() {}
+func (*Response) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{20}
+}
+func (m *Response) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseQuery) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *Response) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseQuery.Marshal(b, m, deterministic)
+ return xxx_messageInfo_Response.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -1731,274 +1794,299 @@ func (m *ResponseQuery) XXX_Marshal(b []byte, deterministic bool) ([]byte, error
return b[:n], nil
}
}
-func (m *ResponseQuery) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseQuery.Merge(m, src)
+func (m *Response) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Response.Merge(m, src)
}
-func (m *ResponseQuery) XXX_Size() int {
+func (m *Response) XXX_Size() int {
return m.Size()
}
-func (m *ResponseQuery) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseQuery.DiscardUnknown(m)
+func (m *Response) XXX_DiscardUnknown() {
+ xxx_messageInfo_Response.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseQuery proto.InternalMessageInfo
+var xxx_messageInfo_Response proto.InternalMessageInfo
-func (m *ResponseQuery) GetCode() uint32 {
- if m != nil {
- return m.Code
- }
- return 0
+type isResponse_Value interface {
+ isResponse_Value()
+ MarshalTo([]byte) (int, error)
+ Size() int
}
-func (m *ResponseQuery) GetLog() string {
- if m != nil {
- return m.Log
- }
- return ""
+type Response_Exception struct {
+ Exception *ResponseException `protobuf:"bytes,1,opt,name=exception,proto3,oneof" json:"exception,omitempty"`
+}
+type Response_Echo struct {
+ Echo *ResponseEcho `protobuf:"bytes,2,opt,name=echo,proto3,oneof" json:"echo,omitempty"`
+}
+type Response_Flush struct {
+ Flush *ResponseFlush `protobuf:"bytes,3,opt,name=flush,proto3,oneof" json:"flush,omitempty"`
+}
+type Response_Info struct {
+ Info *ResponseInfo `protobuf:"bytes,4,opt,name=info,proto3,oneof" json:"info,omitempty"`
+}
+type Response_InitChain struct {
+ InitChain *ResponseInitChain `protobuf:"bytes,5,opt,name=init_chain,json=initChain,proto3,oneof" json:"init_chain,omitempty"`
+}
+type Response_Query struct {
+ Query *ResponseQuery `protobuf:"bytes,6,opt,name=query,proto3,oneof" json:"query,omitempty"`
+}
+type Response_BeginBlock struct {
+ BeginBlock *ResponseBeginBlock `protobuf:"bytes,7,opt,name=begin_block,json=beginBlock,proto3,oneof" json:"begin_block,omitempty"`
+}
+type Response_CheckTx struct {
+ CheckTx *ResponseCheckTx `protobuf:"bytes,8,opt,name=check_tx,json=checkTx,proto3,oneof" json:"check_tx,omitempty"`
+}
+type Response_DeliverTx struct {
+ DeliverTx *ResponseDeliverTx `protobuf:"bytes,9,opt,name=deliver_tx,json=deliverTx,proto3,oneof" json:"deliver_tx,omitempty"`
+}
+type Response_EndBlock struct {
+ EndBlock *ResponseEndBlock `protobuf:"bytes,10,opt,name=end_block,json=endBlock,proto3,oneof" json:"end_block,omitempty"`
+}
+type Response_Commit struct {
+ Commit *ResponseCommit `protobuf:"bytes,11,opt,name=commit,proto3,oneof" json:"commit,omitempty"`
+}
+type Response_ListSnapshots struct {
+ ListSnapshots *ResponseListSnapshots `protobuf:"bytes,12,opt,name=list_snapshots,json=listSnapshots,proto3,oneof" json:"list_snapshots,omitempty"`
+}
+type Response_OfferSnapshot struct {
+ OfferSnapshot *ResponseOfferSnapshot `protobuf:"bytes,13,opt,name=offer_snapshot,json=offerSnapshot,proto3,oneof" json:"offer_snapshot,omitempty"`
+}
+type Response_LoadSnapshotChunk struct {
+ LoadSnapshotChunk *ResponseLoadSnapshotChunk `protobuf:"bytes,14,opt,name=load_snapshot_chunk,json=loadSnapshotChunk,proto3,oneof" json:"load_snapshot_chunk,omitempty"`
+}
+type Response_ApplySnapshotChunk struct {
+ ApplySnapshotChunk *ResponseApplySnapshotChunk `protobuf:"bytes,15,opt,name=apply_snapshot_chunk,json=applySnapshotChunk,proto3,oneof" json:"apply_snapshot_chunk,omitempty"`
}
+type Response_PrepareProposal struct {
+ PrepareProposal *ResponsePrepareProposal `protobuf:"bytes,16,opt,name=prepare_proposal,json=prepareProposal,proto3,oneof" json:"prepare_proposal,omitempty"`
+}
+type Response_ProcessProposal struct {
+ ProcessProposal *ResponseProcessProposal `protobuf:"bytes,17,opt,name=process_proposal,json=processProposal,proto3,oneof" json:"process_proposal,omitempty"`
+}
+type Response_ExtendVote struct {
+ ExtendVote *ResponseExtendVote `protobuf:"bytes,18,opt,name=extend_vote,json=extendVote,proto3,oneof" json:"extend_vote,omitempty"`
+}
+type Response_VerifyVoteExtension struct {
+ VerifyVoteExtension *ResponseVerifyVoteExtension `protobuf:"bytes,19,opt,name=verify_vote_extension,json=verifyVoteExtension,proto3,oneof" json:"verify_vote_extension,omitempty"`
+}
+type Response_FinalizeBlock struct {
+ FinalizeBlock *ResponseFinalizeBlock `protobuf:"bytes,20,opt,name=finalize_block,json=finalizeBlock,proto3,oneof" json:"finalize_block,omitempty"`
+}
+
+func (*Response_Exception) isResponse_Value() {}
+func (*Response_Echo) isResponse_Value() {}
+func (*Response_Flush) isResponse_Value() {}
+func (*Response_Info) isResponse_Value() {}
+func (*Response_InitChain) isResponse_Value() {}
+func (*Response_Query) isResponse_Value() {}
+func (*Response_BeginBlock) isResponse_Value() {}
+func (*Response_CheckTx) isResponse_Value() {}
+func (*Response_DeliverTx) isResponse_Value() {}
+func (*Response_EndBlock) isResponse_Value() {}
+func (*Response_Commit) isResponse_Value() {}
+func (*Response_ListSnapshots) isResponse_Value() {}
+func (*Response_OfferSnapshot) isResponse_Value() {}
+func (*Response_LoadSnapshotChunk) isResponse_Value() {}
+func (*Response_ApplySnapshotChunk) isResponse_Value() {}
+func (*Response_PrepareProposal) isResponse_Value() {}
+func (*Response_ProcessProposal) isResponse_Value() {}
+func (*Response_ExtendVote) isResponse_Value() {}
+func (*Response_VerifyVoteExtension) isResponse_Value() {}
+func (*Response_FinalizeBlock) isResponse_Value() {}
-func (m *ResponseQuery) GetInfo() string {
+func (m *Response) GetValue() isResponse_Value {
if m != nil {
- return m.Info
+ return m.Value
}
- return ""
+ return nil
}
-func (m *ResponseQuery) GetIndex() int64 {
- if m != nil {
- return m.Index
+func (m *Response) GetException() *ResponseException {
+ if x, ok := m.GetValue().(*Response_Exception); ok {
+ return x.Exception
}
- return 0
+ return nil
}
-func (m *ResponseQuery) GetKey() []byte {
- if m != nil {
- return m.Key
+func (m *Response) GetEcho() *ResponseEcho {
+ if x, ok := m.GetValue().(*Response_Echo); ok {
+ return x.Echo
}
return nil
}
-func (m *ResponseQuery) GetValue() []byte {
- if m != nil {
- return m.Value
+func (m *Response) GetFlush() *ResponseFlush {
+ if x, ok := m.GetValue().(*Response_Flush); ok {
+ return x.Flush
}
return nil
}
-func (m *ResponseQuery) GetProofOps() *crypto.ProofOps {
- if m != nil {
- return m.ProofOps
+func (m *Response) GetInfo() *ResponseInfo {
+ if x, ok := m.GetValue().(*Response_Info); ok {
+ return x.Info
}
return nil
}
-func (m *ResponseQuery) GetHeight() int64 {
- if m != nil {
- return m.Height
+func (m *Response) GetInitChain() *ResponseInitChain {
+ if x, ok := m.GetValue().(*Response_InitChain); ok {
+ return x.InitChain
}
- return 0
+ return nil
}
-func (m *ResponseQuery) GetCodespace() string {
- if m != nil {
- return m.Codespace
+func (m *Response) GetQuery() *ResponseQuery {
+ if x, ok := m.GetValue().(*Response_Query); ok {
+ return x.Query
}
- return ""
+ return nil
}
-type ResponseBeginBlock struct {
- Events []Event `protobuf:"bytes,1,rep,name=events,proto3" json:"events,omitempty"`
+// Deprecated: Do not use.
+func (m *Response) GetBeginBlock() *ResponseBeginBlock {
+ if x, ok := m.GetValue().(*Response_BeginBlock); ok {
+ return x.BeginBlock
+ }
+ return nil
}
-func (m *ResponseBeginBlock) Reset() { *m = ResponseBeginBlock{} }
-func (m *ResponseBeginBlock) String() string { return proto.CompactTextString(m) }
-func (*ResponseBeginBlock) ProtoMessage() {}
-func (*ResponseBeginBlock) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{22}
-}
-func (m *ResponseBeginBlock) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ResponseBeginBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ResponseBeginBlock.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+func (m *Response) GetCheckTx() *ResponseCheckTx {
+ if x, ok := m.GetValue().(*Response_CheckTx); ok {
+ return x.CheckTx
}
+ return nil
}
-func (m *ResponseBeginBlock) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseBeginBlock.Merge(m, src)
-}
-func (m *ResponseBeginBlock) XXX_Size() int {
- return m.Size()
-}
-func (m *ResponseBeginBlock) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseBeginBlock.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseBeginBlock proto.InternalMessageInfo
-func (m *ResponseBeginBlock) GetEvents() []Event {
- if m != nil {
- return m.Events
+// Deprecated: Do not use.
+func (m *Response) GetDeliverTx() *ResponseDeliverTx {
+ if x, ok := m.GetValue().(*Response_DeliverTx); ok {
+ return x.DeliverTx
}
return nil
}
-type ResponseCheckTx struct {
- Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
- Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
- Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
- Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
- GasWanted int64 `protobuf:"varint,5,opt,name=gas_wanted,proto3" json:"gas_wanted,omitempty"`
- GasUsed int64 `protobuf:"varint,6,opt,name=gas_used,proto3" json:"gas_used,omitempty"`
- Events []Event `protobuf:"bytes,7,rep,name=events,proto3" json:"events,omitempty"`
- Codespace string `protobuf:"bytes,8,opt,name=codespace,proto3" json:"codespace,omitempty"`
- Sender string `protobuf:"bytes,9,opt,name=sender,proto3" json:"sender,omitempty"`
- Priority int64 `protobuf:"varint,10,opt,name=priority,proto3" json:"priority,omitempty"`
- // mempool_error is set by Tendermint.
- // ABCI applictions creating a ResponseCheckTX should not set mempool_error.
- MempoolError string `protobuf:"bytes,11,opt,name=mempool_error,json=mempoolError,proto3" json:"mempool_error,omitempty"`
+// Deprecated: Do not use.
+func (m *Response) GetEndBlock() *ResponseEndBlock {
+ if x, ok := m.GetValue().(*Response_EndBlock); ok {
+ return x.EndBlock
+ }
+ return nil
}
-func (m *ResponseCheckTx) Reset() { *m = ResponseCheckTx{} }
-func (m *ResponseCheckTx) String() string { return proto.CompactTextString(m) }
-func (*ResponseCheckTx) ProtoMessage() {}
-func (*ResponseCheckTx) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{23}
-}
-func (m *ResponseCheckTx) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ResponseCheckTx) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ResponseCheckTx.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+func (m *Response) GetCommit() *ResponseCommit {
+ if x, ok := m.GetValue().(*Response_Commit); ok {
+ return x.Commit
}
+ return nil
}
-func (m *ResponseCheckTx) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseCheckTx.Merge(m, src)
-}
-func (m *ResponseCheckTx) XXX_Size() int {
- return m.Size()
-}
-func (m *ResponseCheckTx) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseCheckTx.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseCheckTx proto.InternalMessageInfo
-func (m *ResponseCheckTx) GetCode() uint32 {
- if m != nil {
- return m.Code
+func (m *Response) GetListSnapshots() *ResponseListSnapshots {
+ if x, ok := m.GetValue().(*Response_ListSnapshots); ok {
+ return x.ListSnapshots
}
- return 0
+ return nil
}
-func (m *ResponseCheckTx) GetData() []byte {
- if m != nil {
- return m.Data
+func (m *Response) GetOfferSnapshot() *ResponseOfferSnapshot {
+ if x, ok := m.GetValue().(*Response_OfferSnapshot); ok {
+ return x.OfferSnapshot
}
return nil
}
-func (m *ResponseCheckTx) GetLog() string {
- if m != nil {
- return m.Log
+func (m *Response) GetLoadSnapshotChunk() *ResponseLoadSnapshotChunk {
+ if x, ok := m.GetValue().(*Response_LoadSnapshotChunk); ok {
+ return x.LoadSnapshotChunk
}
- return ""
+ return nil
}
-func (m *ResponseCheckTx) GetInfo() string {
- if m != nil {
- return m.Info
+func (m *Response) GetApplySnapshotChunk() *ResponseApplySnapshotChunk {
+ if x, ok := m.GetValue().(*Response_ApplySnapshotChunk); ok {
+ return x.ApplySnapshotChunk
}
- return ""
+ return nil
}
-func (m *ResponseCheckTx) GetGasWanted() int64 {
- if m != nil {
- return m.GasWanted
+func (m *Response) GetPrepareProposal() *ResponsePrepareProposal {
+ if x, ok := m.GetValue().(*Response_PrepareProposal); ok {
+ return x.PrepareProposal
}
- return 0
+ return nil
}
-func (m *ResponseCheckTx) GetGasUsed() int64 {
- if m != nil {
- return m.GasUsed
+func (m *Response) GetProcessProposal() *ResponseProcessProposal {
+ if x, ok := m.GetValue().(*Response_ProcessProposal); ok {
+ return x.ProcessProposal
}
- return 0
+ return nil
}
-func (m *ResponseCheckTx) GetEvents() []Event {
- if m != nil {
- return m.Events
+func (m *Response) GetExtendVote() *ResponseExtendVote {
+ if x, ok := m.GetValue().(*Response_ExtendVote); ok {
+ return x.ExtendVote
}
return nil
}
-func (m *ResponseCheckTx) GetCodespace() string {
- if m != nil {
- return m.Codespace
- }
- return ""
-}
-
-func (m *ResponseCheckTx) GetSender() string {
- if m != nil {
- return m.Sender
+func (m *Response) GetVerifyVoteExtension() *ResponseVerifyVoteExtension {
+ if x, ok := m.GetValue().(*Response_VerifyVoteExtension); ok {
+ return x.VerifyVoteExtension
}
- return ""
+ return nil
}
-func (m *ResponseCheckTx) GetPriority() int64 {
- if m != nil {
- return m.Priority
+func (m *Response) GetFinalizeBlock() *ResponseFinalizeBlock {
+ if x, ok := m.GetValue().(*Response_FinalizeBlock); ok {
+ return x.FinalizeBlock
}
- return 0
+ return nil
}
-func (m *ResponseCheckTx) GetMempoolError() string {
- if m != nil {
- return m.MempoolError
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*Response) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
+ (*Response_Exception)(nil),
+ (*Response_Echo)(nil),
+ (*Response_Flush)(nil),
+ (*Response_Info)(nil),
+ (*Response_InitChain)(nil),
+ (*Response_Query)(nil),
+ (*Response_BeginBlock)(nil),
+ (*Response_CheckTx)(nil),
+ (*Response_DeliverTx)(nil),
+ (*Response_EndBlock)(nil),
+ (*Response_Commit)(nil),
+ (*Response_ListSnapshots)(nil),
+ (*Response_OfferSnapshot)(nil),
+ (*Response_LoadSnapshotChunk)(nil),
+ (*Response_ApplySnapshotChunk)(nil),
+ (*Response_PrepareProposal)(nil),
+ (*Response_ProcessProposal)(nil),
+ (*Response_ExtendVote)(nil),
+ (*Response_VerifyVoteExtension)(nil),
+ (*Response_FinalizeBlock)(nil),
}
- return ""
}
-type ResponseDeliverTx struct {
- Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
- Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
- Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
- Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
- GasWanted int64 `protobuf:"varint,5,opt,name=gas_wanted,proto3" json:"gas_wanted,omitempty"`
- GasUsed int64 `protobuf:"varint,6,opt,name=gas_used,proto3" json:"gas_used,omitempty"`
- Events []Event `protobuf:"bytes,7,rep,name=events,proto3" json:"events,omitempty"`
- Codespace string `protobuf:"bytes,8,opt,name=codespace,proto3" json:"codespace,omitempty"`
+// nondeterministic
+type ResponseException struct {
+ Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"`
}
-func (m *ResponseDeliverTx) Reset() { *m = ResponseDeliverTx{} }
-func (m *ResponseDeliverTx) String() string { return proto.CompactTextString(m) }
-func (*ResponseDeliverTx) ProtoMessage() {}
-func (*ResponseDeliverTx) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{24}
+func (m *ResponseException) Reset() { *m = ResponseException{} }
+func (m *ResponseException) String() string { return proto.CompactTextString(m) }
+func (*ResponseException) ProtoMessage() {}
+func (*ResponseException) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{21}
}
-func (m *ResponseDeliverTx) XXX_Unmarshal(b []byte) error {
+func (m *ResponseException) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseDeliverTx) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseException) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseDeliverTx.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseException.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2008,93 +2096,84 @@ func (m *ResponseDeliverTx) XXX_Marshal(b []byte, deterministic bool) ([]byte, e
return b[:n], nil
}
}
-func (m *ResponseDeliverTx) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseDeliverTx.Merge(m, src)
+func (m *ResponseException) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseException.Merge(m, src)
}
-func (m *ResponseDeliverTx) XXX_Size() int {
+func (m *ResponseException) XXX_Size() int {
return m.Size()
}
-func (m *ResponseDeliverTx) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseDeliverTx.DiscardUnknown(m)
+func (m *ResponseException) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseException.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseDeliverTx proto.InternalMessageInfo
+var xxx_messageInfo_ResponseException proto.InternalMessageInfo
-func (m *ResponseDeliverTx) GetCode() uint32 {
+func (m *ResponseException) GetError() string {
if m != nil {
- return m.Code
+ return m.Error
}
- return 0
+ return ""
}
-func (m *ResponseDeliverTx) GetData() []byte {
- if m != nil {
- return m.Data
- }
- return nil
+type ResponseEcho struct {
+ Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"`
}
-func (m *ResponseDeliverTx) GetLog() string {
- if m != nil {
- return m.Log
- }
- return ""
+func (m *ResponseEcho) Reset() { *m = ResponseEcho{} }
+func (m *ResponseEcho) String() string { return proto.CompactTextString(m) }
+func (*ResponseEcho) ProtoMessage() {}
+func (*ResponseEcho) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{22}
}
-
-func (m *ResponseDeliverTx) GetInfo() string {
- if m != nil {
- return m.Info
- }
- return ""
+func (m *ResponseEcho) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-func (m *ResponseDeliverTx) GetGasWanted() int64 {
- if m != nil {
- return m.GasWanted
+func (m *ResponseEcho) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ResponseEcho.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return 0
}
-
-func (m *ResponseDeliverTx) GetGasUsed() int64 {
- if m != nil {
- return m.GasUsed
- }
- return 0
+func (m *ResponseEcho) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseEcho.Merge(m, src)
}
-
-func (m *ResponseDeliverTx) GetEvents() []Event {
- if m != nil {
- return m.Events
- }
- return nil
+func (m *ResponseEcho) XXX_Size() int {
+ return m.Size()
+}
+func (m *ResponseEcho) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseEcho.DiscardUnknown(m)
}
-func (m *ResponseDeliverTx) GetCodespace() string {
+var xxx_messageInfo_ResponseEcho proto.InternalMessageInfo
+
+func (m *ResponseEcho) GetMessage() string {
if m != nil {
- return m.Codespace
+ return m.Message
}
return ""
}
-type ResponseEndBlock struct {
- ConsensusParamUpdates *types1.ConsensusParams `protobuf:"bytes,2,opt,name=consensus_param_updates,json=consensusParamUpdates,proto3" json:"consensus_param_updates,omitempty"`
- Events []Event `protobuf:"bytes,3,rep,name=events,proto3" json:"events,omitempty"`
- NextCoreChainLockUpdate *types1.CoreChainLock `protobuf:"bytes,100,opt,name=next_core_chain_lock_update,json=nextCoreChainLockUpdate,proto3" json:"next_core_chain_lock_update,omitempty"`
- ValidatorSetUpdate *ValidatorSetUpdate `protobuf:"bytes,101,opt,name=validator_set_update,json=validatorSetUpdate,proto3" json:"validator_set_update,omitempty"`
+type ResponseFlush struct {
}
-func (m *ResponseEndBlock) Reset() { *m = ResponseEndBlock{} }
-func (m *ResponseEndBlock) String() string { return proto.CompactTextString(m) }
-func (*ResponseEndBlock) ProtoMessage() {}
-func (*ResponseEndBlock) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{25}
+func (m *ResponseFlush) Reset() { *m = ResponseFlush{} }
+func (m *ResponseFlush) String() string { return proto.CompactTextString(m) }
+func (*ResponseFlush) ProtoMessage() {}
+func (*ResponseFlush) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{23}
}
-func (m *ResponseEndBlock) XXX_Unmarshal(b []byte) error {
+func (m *ResponseFlush) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseEndBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseFlush) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseEndBlock.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseFlush.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2104,64 +2183,39 @@ func (m *ResponseEndBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, er
return b[:n], nil
}
}
-func (m *ResponseEndBlock) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseEndBlock.Merge(m, src)
+func (m *ResponseFlush) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseFlush.Merge(m, src)
}
-func (m *ResponseEndBlock) XXX_Size() int {
+func (m *ResponseFlush) XXX_Size() int {
return m.Size()
}
-func (m *ResponseEndBlock) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseEndBlock.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseEndBlock proto.InternalMessageInfo
-
-func (m *ResponseEndBlock) GetConsensusParamUpdates() *types1.ConsensusParams {
- if m != nil {
- return m.ConsensusParamUpdates
- }
- return nil
-}
-
-func (m *ResponseEndBlock) GetEvents() []Event {
- if m != nil {
- return m.Events
- }
- return nil
-}
-
-func (m *ResponseEndBlock) GetNextCoreChainLockUpdate() *types1.CoreChainLock {
- if m != nil {
- return m.NextCoreChainLockUpdate
- }
- return nil
+func (m *ResponseFlush) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseFlush.DiscardUnknown(m)
}
-func (m *ResponseEndBlock) GetValidatorSetUpdate() *ValidatorSetUpdate {
- if m != nil {
- return m.ValidatorSetUpdate
- }
- return nil
-}
+var xxx_messageInfo_ResponseFlush proto.InternalMessageInfo
-type ResponseCommit struct {
- // reserve 1
- Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
- RetainHeight int64 `protobuf:"varint,3,opt,name=retain_height,json=retainHeight,proto3" json:"retain_height,omitempty"`
+type ResponseInfo struct {
+ Data string `protobuf:"bytes,1,opt,name=data,proto3" json:"data,omitempty"`
+ // this is the software version of the application. TODO: remove?
+ Version string `protobuf:"bytes,2,opt,name=version,proto3" json:"version,omitempty"`
+ AppVersion uint64 `protobuf:"varint,3,opt,name=app_version,json=appVersion,proto3" json:"app_version,omitempty"`
+ LastBlockHeight int64 `protobuf:"varint,4,opt,name=last_block_height,json=lastBlockHeight,proto3" json:"last_block_height,omitempty"`
+ LastBlockAppHash []byte `protobuf:"bytes,5,opt,name=last_block_app_hash,json=lastBlockAppHash,proto3" json:"last_block_app_hash,omitempty"`
}
-func (m *ResponseCommit) Reset() { *m = ResponseCommit{} }
-func (m *ResponseCommit) String() string { return proto.CompactTextString(m) }
-func (*ResponseCommit) ProtoMessage() {}
-func (*ResponseCommit) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{26}
+func (m *ResponseInfo) Reset() { *m = ResponseInfo{} }
+func (m *ResponseInfo) String() string { return proto.CompactTextString(m) }
+func (*ResponseInfo) ProtoMessage() {}
+func (*ResponseInfo) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{24}
}
-func (m *ResponseCommit) XXX_Unmarshal(b []byte) error {
+func (m *ResponseInfo) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseCommit) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseCommit.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseInfo.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2171,92 +2225,73 @@ func (m *ResponseCommit) XXX_Marshal(b []byte, deterministic bool) ([]byte, erro
return b[:n], nil
}
}
-func (m *ResponseCommit) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseCommit.Merge(m, src)
+func (m *ResponseInfo) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseInfo.Merge(m, src)
}
-func (m *ResponseCommit) XXX_Size() int {
+func (m *ResponseInfo) XXX_Size() int {
return m.Size()
}
-func (m *ResponseCommit) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseCommit.DiscardUnknown(m)
+func (m *ResponseInfo) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseInfo.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseCommit proto.InternalMessageInfo
+var xxx_messageInfo_ResponseInfo proto.InternalMessageInfo
-func (m *ResponseCommit) GetData() []byte {
+func (m *ResponseInfo) GetData() string {
if m != nil {
return m.Data
}
- return nil
+ return ""
}
-func (m *ResponseCommit) GetRetainHeight() int64 {
+func (m *ResponseInfo) GetVersion() string {
if m != nil {
- return m.RetainHeight
+ return m.Version
}
- return 0
+ return ""
}
-type ResponseListSnapshots struct {
- Snapshots []*Snapshot `protobuf:"bytes,1,rep,name=snapshots,proto3" json:"snapshots,omitempty"`
+func (m *ResponseInfo) GetAppVersion() uint64 {
+ if m != nil {
+ return m.AppVersion
+ }
+ return 0
}
-func (m *ResponseListSnapshots) Reset() { *m = ResponseListSnapshots{} }
-func (m *ResponseListSnapshots) String() string { return proto.CompactTextString(m) }
-func (*ResponseListSnapshots) ProtoMessage() {}
-func (*ResponseListSnapshots) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{27}
-}
-func (m *ResponseListSnapshots) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ResponseListSnapshots) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ResponseListSnapshots.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+func (m *ResponseInfo) GetLastBlockHeight() int64 {
+ if m != nil {
+ return m.LastBlockHeight
}
+ return 0
}
-func (m *ResponseListSnapshots) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseListSnapshots.Merge(m, src)
-}
-func (m *ResponseListSnapshots) XXX_Size() int {
- return m.Size()
-}
-func (m *ResponseListSnapshots) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseListSnapshots.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseListSnapshots proto.InternalMessageInfo
-func (m *ResponseListSnapshots) GetSnapshots() []*Snapshot {
+func (m *ResponseInfo) GetLastBlockAppHash() []byte {
if m != nil {
- return m.Snapshots
+ return m.LastBlockAppHash
}
return nil
}
-type ResponseOfferSnapshot struct {
- Result ResponseOfferSnapshot_Result `protobuf:"varint,1,opt,name=result,proto3,enum=tendermint.abci.ResponseOfferSnapshot_Result" json:"result,omitempty"`
+type ResponseInitChain struct {
+ ConsensusParams *types1.ConsensusParams `protobuf:"bytes,1,opt,name=consensus_params,json=consensusParams,proto3" json:"consensus_params,omitempty"`
+ AppHash []byte `protobuf:"bytes,3,opt,name=app_hash,json=appHash,proto3" json:"app_hash,omitempty"`
+ ValidatorSetUpdate ValidatorSetUpdate `protobuf:"bytes,100,opt,name=validator_set_update,json=validatorSetUpdate,proto3" json:"validator_set_update"`
+ NextCoreChainLockUpdate *types1.CoreChainLock `protobuf:"bytes,101,opt,name=next_core_chain_lock_update,json=nextCoreChainLockUpdate,proto3" json:"next_core_chain_lock_update,omitempty"`
+ InitialCoreHeight uint32 `protobuf:"varint,102,opt,name=initial_core_height,json=initialCoreHeight,proto3" json:"initial_core_height,omitempty"`
}
-func (m *ResponseOfferSnapshot) Reset() { *m = ResponseOfferSnapshot{} }
-func (m *ResponseOfferSnapshot) String() string { return proto.CompactTextString(m) }
-func (*ResponseOfferSnapshot) ProtoMessage() {}
-func (*ResponseOfferSnapshot) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{28}
+func (m *ResponseInitChain) Reset() { *m = ResponseInitChain{} }
+func (m *ResponseInitChain) String() string { return proto.CompactTextString(m) }
+func (*ResponseInitChain) ProtoMessage() {}
+func (*ResponseInitChain) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{25}
}
-func (m *ResponseOfferSnapshot) XXX_Unmarshal(b []byte) error {
+func (m *ResponseInitChain) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseOfferSnapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseInitChain) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseOfferSnapshot.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseInitChain.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2266,87 +2301,78 @@ func (m *ResponseOfferSnapshot) XXX_Marshal(b []byte, deterministic bool) ([]byt
return b[:n], nil
}
}
-func (m *ResponseOfferSnapshot) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseOfferSnapshot.Merge(m, src)
+func (m *ResponseInitChain) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseInitChain.Merge(m, src)
}
-func (m *ResponseOfferSnapshot) XXX_Size() int {
+func (m *ResponseInitChain) XXX_Size() int {
return m.Size()
}
-func (m *ResponseOfferSnapshot) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseOfferSnapshot.DiscardUnknown(m)
+func (m *ResponseInitChain) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseInitChain.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseOfferSnapshot proto.InternalMessageInfo
+var xxx_messageInfo_ResponseInitChain proto.InternalMessageInfo
-func (m *ResponseOfferSnapshot) GetResult() ResponseOfferSnapshot_Result {
+func (m *ResponseInitChain) GetConsensusParams() *types1.ConsensusParams {
if m != nil {
- return m.Result
+ return m.ConsensusParams
}
- return ResponseOfferSnapshot_UNKNOWN
+ return nil
}
-type ResponseLoadSnapshotChunk struct {
- Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
+func (m *ResponseInitChain) GetAppHash() []byte {
+ if m != nil {
+ return m.AppHash
+ }
+ return nil
}
-func (m *ResponseLoadSnapshotChunk) Reset() { *m = ResponseLoadSnapshotChunk{} }
-func (m *ResponseLoadSnapshotChunk) String() string { return proto.CompactTextString(m) }
-func (*ResponseLoadSnapshotChunk) ProtoMessage() {}
-func (*ResponseLoadSnapshotChunk) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{29}
-}
-func (m *ResponseLoadSnapshotChunk) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ResponseLoadSnapshotChunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ResponseLoadSnapshotChunk.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+func (m *ResponseInitChain) GetValidatorSetUpdate() ValidatorSetUpdate {
+ if m != nil {
+ return m.ValidatorSetUpdate
}
+ return ValidatorSetUpdate{}
}
-func (m *ResponseLoadSnapshotChunk) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseLoadSnapshotChunk.Merge(m, src)
-}
-func (m *ResponseLoadSnapshotChunk) XXX_Size() int {
- return m.Size()
-}
-func (m *ResponseLoadSnapshotChunk) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseLoadSnapshotChunk.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ResponseLoadSnapshotChunk proto.InternalMessageInfo
-func (m *ResponseLoadSnapshotChunk) GetChunk() []byte {
+func (m *ResponseInitChain) GetNextCoreChainLockUpdate() *types1.CoreChainLock {
if m != nil {
- return m.Chunk
+ return m.NextCoreChainLockUpdate
}
return nil
}
-type ResponseApplySnapshotChunk struct {
- Result ResponseApplySnapshotChunk_Result `protobuf:"varint,1,opt,name=result,proto3,enum=tendermint.abci.ResponseApplySnapshotChunk_Result" json:"result,omitempty"`
- RefetchChunks []uint32 `protobuf:"varint,2,rep,packed,name=refetch_chunks,json=refetchChunks,proto3" json:"refetch_chunks,omitempty"`
- RejectSenders []string `protobuf:"bytes,3,rep,name=reject_senders,json=rejectSenders,proto3" json:"reject_senders,omitempty"`
+func (m *ResponseInitChain) GetInitialCoreHeight() uint32 {
+ if m != nil {
+ return m.InitialCoreHeight
+ }
+ return 0
}
-func (m *ResponseApplySnapshotChunk) Reset() { *m = ResponseApplySnapshotChunk{} }
-func (m *ResponseApplySnapshotChunk) String() string { return proto.CompactTextString(m) }
-func (*ResponseApplySnapshotChunk) ProtoMessage() {}
-func (*ResponseApplySnapshotChunk) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{30}
+type ResponseQuery struct {
+ Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
+ // bytes data = 2; // use "value" instead.
+ Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
+ Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
+ Index int64 `protobuf:"varint,5,opt,name=index,proto3" json:"index,omitempty"`
+ Key []byte `protobuf:"bytes,6,opt,name=key,proto3" json:"key,omitempty"`
+ Value []byte `protobuf:"bytes,7,opt,name=value,proto3" json:"value,omitempty"`
+ ProofOps *crypto.ProofOps `protobuf:"bytes,8,opt,name=proof_ops,json=proofOps,proto3" json:"proof_ops,omitempty"`
+ Height int64 `protobuf:"varint,9,opt,name=height,proto3" json:"height,omitempty"`
+ Codespace string `protobuf:"bytes,10,opt,name=codespace,proto3" json:"codespace,omitempty"`
}
-func (m *ResponseApplySnapshotChunk) XXX_Unmarshal(b []byte) error {
+
+func (m *ResponseQuery) Reset() { *m = ResponseQuery{} }
+func (m *ResponseQuery) String() string { return proto.CompactTextString(m) }
+func (*ResponseQuery) ProtoMessage() {}
+func (*ResponseQuery) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{26}
+}
+func (m *ResponseQuery) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ResponseApplySnapshotChunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseQuery) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ResponseApplySnapshotChunk.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseQuery.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2356,127 +2382,97 @@ func (m *ResponseApplySnapshotChunk) XXX_Marshal(b []byte, deterministic bool) (
return b[:n], nil
}
}
-func (m *ResponseApplySnapshotChunk) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ResponseApplySnapshotChunk.Merge(m, src)
+func (m *ResponseQuery) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseQuery.Merge(m, src)
}
-func (m *ResponseApplySnapshotChunk) XXX_Size() int {
+func (m *ResponseQuery) XXX_Size() int {
return m.Size()
}
-func (m *ResponseApplySnapshotChunk) XXX_DiscardUnknown() {
- xxx_messageInfo_ResponseApplySnapshotChunk.DiscardUnknown(m)
+func (m *ResponseQuery) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseQuery.DiscardUnknown(m)
}
-var xxx_messageInfo_ResponseApplySnapshotChunk proto.InternalMessageInfo
+var xxx_messageInfo_ResponseQuery proto.InternalMessageInfo
-func (m *ResponseApplySnapshotChunk) GetResult() ResponseApplySnapshotChunk_Result {
+func (m *ResponseQuery) GetCode() uint32 {
if m != nil {
- return m.Result
+ return m.Code
}
- return ResponseApplySnapshotChunk_UNKNOWN
+ return 0
}
-func (m *ResponseApplySnapshotChunk) GetRefetchChunks() []uint32 {
+func (m *ResponseQuery) GetLog() string {
if m != nil {
- return m.RefetchChunks
+ return m.Log
}
- return nil
+ return ""
}
-func (m *ResponseApplySnapshotChunk) GetRejectSenders() []string {
+func (m *ResponseQuery) GetInfo() string {
if m != nil {
- return m.RejectSenders
- }
- return nil
-}
-
-type LastCommitInfo struct {
- Round int32 `protobuf:"varint,1,opt,name=round,proto3" json:"round,omitempty"`
- QuorumHash []byte `protobuf:"bytes,3,opt,name=quorum_hash,json=quorumHash,proto3" json:"quorum_hash,omitempty"`
- BlockSignature []byte `protobuf:"bytes,4,opt,name=block_signature,json=blockSignature,proto3" json:"block_signature,omitempty"`
- StateSignature []byte `protobuf:"bytes,5,opt,name=state_signature,json=stateSignature,proto3" json:"state_signature,omitempty"`
-}
-
-func (m *LastCommitInfo) Reset() { *m = LastCommitInfo{} }
-func (m *LastCommitInfo) String() string { return proto.CompactTextString(m) }
-func (*LastCommitInfo) ProtoMessage() {}
-func (*LastCommitInfo) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{31}
-}
-func (m *LastCommitInfo) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *LastCommitInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_LastCommitInfo.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+ return m.Info
}
+ return ""
}
-func (m *LastCommitInfo) XXX_Merge(src proto.Message) {
- xxx_messageInfo_LastCommitInfo.Merge(m, src)
-}
-func (m *LastCommitInfo) XXX_Size() int {
- return m.Size()
-}
-func (m *LastCommitInfo) XXX_DiscardUnknown() {
- xxx_messageInfo_LastCommitInfo.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_LastCommitInfo proto.InternalMessageInfo
-func (m *LastCommitInfo) GetRound() int32 {
+func (m *ResponseQuery) GetIndex() int64 {
if m != nil {
- return m.Round
+ return m.Index
}
return 0
}
-func (m *LastCommitInfo) GetQuorumHash() []byte {
+func (m *ResponseQuery) GetKey() []byte {
if m != nil {
- return m.QuorumHash
+ return m.Key
}
return nil
}
-func (m *LastCommitInfo) GetBlockSignature() []byte {
+func (m *ResponseQuery) GetValue() []byte {
if m != nil {
- return m.BlockSignature
+ return m.Value
}
return nil
}
-func (m *LastCommitInfo) GetStateSignature() []byte {
+func (m *ResponseQuery) GetProofOps() *crypto.ProofOps {
if m != nil {
- return m.StateSignature
+ return m.ProofOps
}
return nil
}
-// Event allows application developers to attach additional information to
-// ResponseBeginBlock, ResponseEndBlock, ResponseCheckTx and ResponseDeliverTx.
-// Later, transactions may be queried using these events.
-type Event struct {
- Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
- Attributes []EventAttribute `protobuf:"bytes,2,rep,name=attributes,proto3" json:"attributes,omitempty"`
+func (m *ResponseQuery) GetHeight() int64 {
+ if m != nil {
+ return m.Height
+ }
+ return 0
}
-func (m *Event) Reset() { *m = Event{} }
-func (m *Event) String() string { return proto.CompactTextString(m) }
-func (*Event) ProtoMessage() {}
-func (*Event) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{32}
+func (m *ResponseQuery) GetCodespace() string {
+ if m != nil {
+ return m.Codespace
+ }
+ return ""
}
-func (m *Event) XXX_Unmarshal(b []byte) error {
+
+type ResponseBeginBlock struct {
+ Events []Event `protobuf:"bytes,1,rep,name=events,proto3" json:"events,omitempty"`
+}
+
+func (m *ResponseBeginBlock) Reset() { *m = ResponseBeginBlock{} }
+func (m *ResponseBeginBlock) String() string { return proto.CompactTextString(m) }
+func (*ResponseBeginBlock) ProtoMessage() {}
+func (*ResponseBeginBlock) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{27}
+}
+func (m *ResponseBeginBlock) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *Event) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseBeginBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_Event.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseBeginBlock.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2486,51 +2482,52 @@ func (m *Event) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
-func (m *Event) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Event.Merge(m, src)
+func (m *ResponseBeginBlock) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseBeginBlock.Merge(m, src)
}
-func (m *Event) XXX_Size() int {
+func (m *ResponseBeginBlock) XXX_Size() int {
return m.Size()
}
-func (m *Event) XXX_DiscardUnknown() {
- xxx_messageInfo_Event.DiscardUnknown(m)
+func (m *ResponseBeginBlock) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseBeginBlock.DiscardUnknown(m)
}
-var xxx_messageInfo_Event proto.InternalMessageInfo
-
-func (m *Event) GetType() string {
- if m != nil {
- return m.Type
- }
- return ""
-}
+var xxx_messageInfo_ResponseBeginBlock proto.InternalMessageInfo
-func (m *Event) GetAttributes() []EventAttribute {
+func (m *ResponseBeginBlock) GetEvents() []Event {
if m != nil {
- return m.Attributes
+ return m.Events
}
return nil
}
-// EventAttribute is a single key-value pair, associated with an event.
-type EventAttribute struct {
- Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
- Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
- Index bool `protobuf:"varint,3,opt,name=index,proto3" json:"index,omitempty"`
+type ResponseCheckTx struct {
+ Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
+ Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
+ Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
+ Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
+ GasWanted int64 `protobuf:"varint,5,opt,name=gas_wanted,json=gasWanted,proto3" json:"gas_wanted,omitempty"`
+ GasUsed int64 `protobuf:"varint,6,opt,name=gas_used,json=gasUsed,proto3" json:"gas_used,omitempty"`
+ Events []Event `protobuf:"bytes,7,rep,name=events,proto3" json:"events,omitempty"`
+ Codespace string `protobuf:"bytes,8,opt,name=codespace,proto3" json:"codespace,omitempty"`
+ Sender string `protobuf:"bytes,9,opt,name=sender,proto3" json:"sender,omitempty"`
+ Priority int64 `protobuf:"varint,10,opt,name=priority,proto3" json:"priority,omitempty"`
+ // ABCI applications creating a ResponseCheckTX should not set mempool_error.
+ MempoolError string `protobuf:"bytes,11,opt,name=mempool_error,json=mempoolError,proto3" json:"mempool_error,omitempty"`
}
-func (m *EventAttribute) Reset() { *m = EventAttribute{} }
-func (m *EventAttribute) String() string { return proto.CompactTextString(m) }
-func (*EventAttribute) ProtoMessage() {}
-func (*EventAttribute) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{33}
+func (m *ResponseCheckTx) Reset() { *m = ResponseCheckTx{} }
+func (m *ResponseCheckTx) String() string { return proto.CompactTextString(m) }
+func (*ResponseCheckTx) ProtoMessage() {}
+func (*ResponseCheckTx) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{28}
}
-func (m *EventAttribute) XXX_Unmarshal(b []byte) error {
+func (m *ResponseCheckTx) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *EventAttribute) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseCheckTx) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_EventAttribute.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseCheckTx.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2540,130 +2537,118 @@ func (m *EventAttribute) XXX_Marshal(b []byte, deterministic bool) ([]byte, erro
return b[:n], nil
}
}
-func (m *EventAttribute) XXX_Merge(src proto.Message) {
- xxx_messageInfo_EventAttribute.Merge(m, src)
+func (m *ResponseCheckTx) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseCheckTx.Merge(m, src)
}
-func (m *EventAttribute) XXX_Size() int {
+func (m *ResponseCheckTx) XXX_Size() int {
return m.Size()
}
-func (m *EventAttribute) XXX_DiscardUnknown() {
- xxx_messageInfo_EventAttribute.DiscardUnknown(m)
+func (m *ResponseCheckTx) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseCheckTx.DiscardUnknown(m)
}
-var xxx_messageInfo_EventAttribute proto.InternalMessageInfo
+var xxx_messageInfo_ResponseCheckTx proto.InternalMessageInfo
-func (m *EventAttribute) GetKey() string {
+func (m *ResponseCheckTx) GetCode() uint32 {
if m != nil {
- return m.Key
+ return m.Code
}
- return ""
+ return 0
}
-func (m *EventAttribute) GetValue() string {
+func (m *ResponseCheckTx) GetData() []byte {
if m != nil {
- return m.Value
+ return m.Data
}
- return ""
+ return nil
}
-func (m *EventAttribute) GetIndex() bool {
+func (m *ResponseCheckTx) GetLog() string {
if m != nil {
- return m.Index
+ return m.Log
}
- return false
+ return ""
}
-// TxResult contains results of executing the transaction.
-//
-// One usage is indexing transaction results.
-type TxResult struct {
- Height int64 `protobuf:"varint,1,opt,name=height,proto3" json:"height,omitempty"`
- Index uint32 `protobuf:"varint,2,opt,name=index,proto3" json:"index,omitempty"`
- Tx []byte `protobuf:"bytes,3,opt,name=tx,proto3" json:"tx,omitempty"`
- Result ResponseDeliverTx `protobuf:"bytes,4,opt,name=result,proto3" json:"result"`
+func (m *ResponseCheckTx) GetInfo() string {
+ if m != nil {
+ return m.Info
+ }
+ return ""
}
-func (m *TxResult) Reset() { *m = TxResult{} }
-func (m *TxResult) String() string { return proto.CompactTextString(m) }
-func (*TxResult) ProtoMessage() {}
-func (*TxResult) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{34}
-}
-func (m *TxResult) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *TxResult) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_TxResult.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
+func (m *ResponseCheckTx) GetGasWanted() int64 {
+ if m != nil {
+ return m.GasWanted
}
+ return 0
}
-func (m *TxResult) XXX_Merge(src proto.Message) {
- xxx_messageInfo_TxResult.Merge(m, src)
-}
-func (m *TxResult) XXX_Size() int {
- return m.Size()
-}
-func (m *TxResult) XXX_DiscardUnknown() {
- xxx_messageInfo_TxResult.DiscardUnknown(m)
+
+func (m *ResponseCheckTx) GetGasUsed() int64 {
+ if m != nil {
+ return m.GasUsed
+ }
+ return 0
}
-var xxx_messageInfo_TxResult proto.InternalMessageInfo
+func (m *ResponseCheckTx) GetEvents() []Event {
+ if m != nil {
+ return m.Events
+ }
+ return nil
+}
-func (m *TxResult) GetHeight() int64 {
+func (m *ResponseCheckTx) GetCodespace() string {
if m != nil {
- return m.Height
+ return m.Codespace
}
- return 0
+ return ""
}
-func (m *TxResult) GetIndex() uint32 {
+func (m *ResponseCheckTx) GetSender() string {
if m != nil {
- return m.Index
+ return m.Sender
}
- return 0
+ return ""
}
-func (m *TxResult) GetTx() []byte {
+func (m *ResponseCheckTx) GetPriority() int64 {
if m != nil {
- return m.Tx
+ return m.Priority
}
- return nil
+ return 0
}
-func (m *TxResult) GetResult() ResponseDeliverTx {
+func (m *ResponseCheckTx) GetMempoolError() string {
if m != nil {
- return m.Result
+ return m.MempoolError
}
- return ResponseDeliverTx{}
+ return ""
}
-// Validator
-type Validator struct {
- // bytes address = 1; // The first 20 bytes of SHA256(public key)
- // PubKey pub_key = 2 [(gogoproto.nullable)=false];
- Power int64 `protobuf:"varint,3,opt,name=power,proto3" json:"power,omitempty"`
- ProTxHash []byte `protobuf:"bytes,4,opt,name=pro_tx_hash,json=proTxHash,proto3" json:"pro_tx_hash,omitempty"`
+type ResponseDeliverTx struct {
+ Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
+ Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
+ Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
+ Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
+ GasWanted int64 `protobuf:"varint,5,opt,name=gas_wanted,proto3" json:"gas_wanted,omitempty"`
+ GasUsed int64 `protobuf:"varint,6,opt,name=gas_used,proto3" json:"gas_used,omitempty"`
+ Events []Event `protobuf:"bytes,7,rep,name=events,proto3" json:"events,omitempty"`
+ Codespace string `protobuf:"bytes,8,opt,name=codespace,proto3" json:"codespace,omitempty"`
}
-func (m *Validator) Reset() { *m = Validator{} }
-func (m *Validator) String() string { return proto.CompactTextString(m) }
-func (*Validator) ProtoMessage() {}
-func (*Validator) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{35}
+func (m *ResponseDeliverTx) Reset() { *m = ResponseDeliverTx{} }
+func (m *ResponseDeliverTx) String() string { return proto.CompactTextString(m) }
+func (*ResponseDeliverTx) ProtoMessage() {}
+func (*ResponseDeliverTx) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{29}
}
-func (m *Validator) XXX_Unmarshal(b []byte) error {
+func (m *ResponseDeliverTx) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *Validator) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseDeliverTx) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_Validator.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseDeliverTx.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2673,119 +2658,93 @@ func (m *Validator) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
-func (m *Validator) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Validator.Merge(m, src)
+func (m *ResponseDeliverTx) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseDeliverTx.Merge(m, src)
}
-func (m *Validator) XXX_Size() int {
+func (m *ResponseDeliverTx) XXX_Size() int {
return m.Size()
}
-func (m *Validator) XXX_DiscardUnknown() {
- xxx_messageInfo_Validator.DiscardUnknown(m)
+func (m *ResponseDeliverTx) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseDeliverTx.DiscardUnknown(m)
}
-var xxx_messageInfo_Validator proto.InternalMessageInfo
+var xxx_messageInfo_ResponseDeliverTx proto.InternalMessageInfo
-func (m *Validator) GetPower() int64 {
+func (m *ResponseDeliverTx) GetCode() uint32 {
if m != nil {
- return m.Power
+ return m.Code
}
return 0
}
-func (m *Validator) GetProTxHash() []byte {
+func (m *ResponseDeliverTx) GetData() []byte {
if m != nil {
- return m.ProTxHash
+ return m.Data
}
return nil
}
-// ValidatorUpdate
-type ValidatorUpdate struct {
- PubKey *crypto.PublicKey `protobuf:"bytes,1,opt,name=pub_key,json=pubKey,proto3" json:"pub_key,omitempty"`
- Power int64 `protobuf:"varint,2,opt,name=power,proto3" json:"power,omitempty"`
- ProTxHash []byte `protobuf:"bytes,3,opt,name=pro_tx_hash,json=proTxHash,proto3" json:"pro_tx_hash,omitempty"`
- NodeAddress string `protobuf:"bytes,4,opt,name=node_address,json=nodeAddress,proto3" json:"node_address,omitempty"`
+func (m *ResponseDeliverTx) GetLog() string {
+ if m != nil {
+ return m.Log
+ }
+ return ""
}
-func (m *ValidatorUpdate) Reset() { *m = ValidatorUpdate{} }
-func (m *ValidatorUpdate) String() string { return proto.CompactTextString(m) }
-func (*ValidatorUpdate) ProtoMessage() {}
-func (*ValidatorUpdate) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{36}
+func (m *ResponseDeliverTx) GetInfo() string {
+ if m != nil {
+ return m.Info
+ }
+ return ""
}
-func (m *ValidatorUpdate) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ValidatorUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- if deterministic {
- return xxx_messageInfo_ValidatorUpdate.Marshal(b, m, deterministic)
- } else {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
- }
-}
-func (m *ValidatorUpdate) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ValidatorUpdate.Merge(m, src)
-}
-func (m *ValidatorUpdate) XXX_Size() int {
- return m.Size()
-}
-func (m *ValidatorUpdate) XXX_DiscardUnknown() {
- xxx_messageInfo_ValidatorUpdate.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ValidatorUpdate proto.InternalMessageInfo
-func (m *ValidatorUpdate) GetPubKey() *crypto.PublicKey {
+func (m *ResponseDeliverTx) GetGasWanted() int64 {
if m != nil {
- return m.PubKey
+ return m.GasWanted
}
- return nil
+ return 0
}
-func (m *ValidatorUpdate) GetPower() int64 {
+func (m *ResponseDeliverTx) GetGasUsed() int64 {
if m != nil {
- return m.Power
+ return m.GasUsed
}
return 0
}
-func (m *ValidatorUpdate) GetProTxHash() []byte {
+func (m *ResponseDeliverTx) GetEvents() []Event {
if m != nil {
- return m.ProTxHash
+ return m.Events
}
return nil
}
-func (m *ValidatorUpdate) GetNodeAddress() string {
+func (m *ResponseDeliverTx) GetCodespace() string {
if m != nil {
- return m.NodeAddress
+ return m.Codespace
}
return ""
}
-type ValidatorSetUpdate struct {
- ValidatorUpdates []ValidatorUpdate `protobuf:"bytes,1,rep,name=validator_updates,json=validatorUpdates,proto3" json:"validator_updates"`
- ThresholdPublicKey crypto.PublicKey `protobuf:"bytes,2,opt,name=threshold_public_key,json=thresholdPublicKey,proto3" json:"threshold_public_key"`
- QuorumHash []byte `protobuf:"bytes,3,opt,name=quorum_hash,json=quorumHash,proto3" json:"quorum_hash,omitempty"`
+type ResponseEndBlock struct {
+ ConsensusParamUpdates *types1.ConsensusParams `protobuf:"bytes,2,opt,name=consensus_param_updates,json=consensusParamUpdates,proto3" json:"consensus_param_updates,omitempty"`
+ Events []Event `protobuf:"bytes,3,rep,name=events,proto3" json:"events,omitempty"`
+ NextCoreChainLockUpdate *types1.CoreChainLock `protobuf:"bytes,100,opt,name=next_core_chain_lock_update,json=nextCoreChainLockUpdate,proto3" json:"next_core_chain_lock_update,omitempty"`
+ ValidatorSetUpdate *ValidatorSetUpdate `protobuf:"bytes,101,opt,name=validator_set_update,json=validatorSetUpdate,proto3" json:"validator_set_update,omitempty"`
}
-func (m *ValidatorSetUpdate) Reset() { *m = ValidatorSetUpdate{} }
-func (m *ValidatorSetUpdate) String() string { return proto.CompactTextString(m) }
-func (*ValidatorSetUpdate) ProtoMessage() {}
-func (*ValidatorSetUpdate) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{37}
+func (m *ResponseEndBlock) Reset() { *m = ResponseEndBlock{} }
+func (m *ResponseEndBlock) String() string { return proto.CompactTextString(m) }
+func (*ResponseEndBlock) ProtoMessage() {}
+func (*ResponseEndBlock) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{30}
}
-func (m *ValidatorSetUpdate) XXX_Unmarshal(b []byte) error {
+func (m *ResponseEndBlock) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ValidatorSetUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseEndBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ValidatorSetUpdate.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseEndBlock.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2795,55 +2754,64 @@ func (m *ValidatorSetUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte,
return b[:n], nil
}
}
-func (m *ValidatorSetUpdate) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ValidatorSetUpdate.Merge(m, src)
+func (m *ResponseEndBlock) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseEndBlock.Merge(m, src)
}
-func (m *ValidatorSetUpdate) XXX_Size() int {
+func (m *ResponseEndBlock) XXX_Size() int {
return m.Size()
}
-func (m *ValidatorSetUpdate) XXX_DiscardUnknown() {
- xxx_messageInfo_ValidatorSetUpdate.DiscardUnknown(m)
+func (m *ResponseEndBlock) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseEndBlock.DiscardUnknown(m)
}
-var xxx_messageInfo_ValidatorSetUpdate proto.InternalMessageInfo
+var xxx_messageInfo_ResponseEndBlock proto.InternalMessageInfo
-func (m *ValidatorSetUpdate) GetValidatorUpdates() []ValidatorUpdate {
+func (m *ResponseEndBlock) GetConsensusParamUpdates() *types1.ConsensusParams {
if m != nil {
- return m.ValidatorUpdates
+ return m.ConsensusParamUpdates
}
return nil
}
-func (m *ValidatorSetUpdate) GetThresholdPublicKey() crypto.PublicKey {
+func (m *ResponseEndBlock) GetEvents() []Event {
if m != nil {
- return m.ThresholdPublicKey
+ return m.Events
}
- return crypto.PublicKey{}
+ return nil
}
-func (m *ValidatorSetUpdate) GetQuorumHash() []byte {
+func (m *ResponseEndBlock) GetNextCoreChainLockUpdate() *types1.CoreChainLock {
if m != nil {
- return m.QuorumHash
+ return m.NextCoreChainLockUpdate
}
return nil
}
-type ThresholdPublicKeyUpdate struct {
- ThresholdPublicKey crypto.PublicKey `protobuf:"bytes,1,opt,name=threshold_public_key,json=thresholdPublicKey,proto3" json:"threshold_public_key"`
+func (m *ResponseEndBlock) GetValidatorSetUpdate() *ValidatorSetUpdate {
+ if m != nil {
+ return m.ValidatorSetUpdate
+ }
+ return nil
}
-func (m *ThresholdPublicKeyUpdate) Reset() { *m = ThresholdPublicKeyUpdate{} }
-func (m *ThresholdPublicKeyUpdate) String() string { return proto.CompactTextString(m) }
-func (*ThresholdPublicKeyUpdate) ProtoMessage() {}
-func (*ThresholdPublicKeyUpdate) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{38}
+type ResponseCommit struct {
+ // reserve 1
+ Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
+ RetainHeight int64 `protobuf:"varint,3,opt,name=retain_height,json=retainHeight,proto3" json:"retain_height,omitempty"`
}
-func (m *ThresholdPublicKeyUpdate) XXX_Unmarshal(b []byte) error {
+
+func (m *ResponseCommit) Reset() { *m = ResponseCommit{} }
+func (m *ResponseCommit) String() string { return proto.CompactTextString(m) }
+func (*ResponseCommit) ProtoMessage() {}
+func (*ResponseCommit) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{31}
+}
+func (m *ResponseCommit) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *ThresholdPublicKeyUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseCommit) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_ThresholdPublicKeyUpdate.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseCommit.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2853,41 +2821,48 @@ func (m *ThresholdPublicKeyUpdate) XXX_Marshal(b []byte, deterministic bool) ([]
return b[:n], nil
}
}
-func (m *ThresholdPublicKeyUpdate) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ThresholdPublicKeyUpdate.Merge(m, src)
+func (m *ResponseCommit) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseCommit.Merge(m, src)
}
-func (m *ThresholdPublicKeyUpdate) XXX_Size() int {
+func (m *ResponseCommit) XXX_Size() int {
return m.Size()
}
-func (m *ThresholdPublicKeyUpdate) XXX_DiscardUnknown() {
- xxx_messageInfo_ThresholdPublicKeyUpdate.DiscardUnknown(m)
+func (m *ResponseCommit) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseCommit.DiscardUnknown(m)
}
-var xxx_messageInfo_ThresholdPublicKeyUpdate proto.InternalMessageInfo
+var xxx_messageInfo_ResponseCommit proto.InternalMessageInfo
-func (m *ThresholdPublicKeyUpdate) GetThresholdPublicKey() crypto.PublicKey {
+func (m *ResponseCommit) GetData() []byte {
if m != nil {
- return m.ThresholdPublicKey
+ return m.Data
}
- return crypto.PublicKey{}
+ return nil
}
-type QuorumHashUpdate struct {
- QuorumHash []byte `protobuf:"bytes,1,opt,name=quorum_hash,json=quorumHash,proto3" json:"quorum_hash,omitempty"`
+func (m *ResponseCommit) GetRetainHeight() int64 {
+ if m != nil {
+ return m.RetainHeight
+ }
+ return 0
}
-func (m *QuorumHashUpdate) Reset() { *m = QuorumHashUpdate{} }
-func (m *QuorumHashUpdate) String() string { return proto.CompactTextString(m) }
-func (*QuorumHashUpdate) ProtoMessage() {}
-func (*QuorumHashUpdate) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{39}
+type ResponseListSnapshots struct {
+ Snapshots []*Snapshot `protobuf:"bytes,1,rep,name=snapshots,proto3" json:"snapshots,omitempty"`
}
-func (m *QuorumHashUpdate) XXX_Unmarshal(b []byte) error {
+
+func (m *ResponseListSnapshots) Reset() { *m = ResponseListSnapshots{} }
+func (m *ResponseListSnapshots) String() string { return proto.CompactTextString(m) }
+func (*ResponseListSnapshots) ProtoMessage() {}
+func (*ResponseListSnapshots) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{32}
+}
+func (m *ResponseListSnapshots) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *QuorumHashUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseListSnapshots) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_QuorumHashUpdate.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseListSnapshots.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2897,43 +2872,41 @@ func (m *QuorumHashUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, er
return b[:n], nil
}
}
-func (m *QuorumHashUpdate) XXX_Merge(src proto.Message) {
- xxx_messageInfo_QuorumHashUpdate.Merge(m, src)
+func (m *ResponseListSnapshots) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseListSnapshots.Merge(m, src)
}
-func (m *QuorumHashUpdate) XXX_Size() int {
+func (m *ResponseListSnapshots) XXX_Size() int {
return m.Size()
}
-func (m *QuorumHashUpdate) XXX_DiscardUnknown() {
- xxx_messageInfo_QuorumHashUpdate.DiscardUnknown(m)
+func (m *ResponseListSnapshots) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseListSnapshots.DiscardUnknown(m)
}
-var xxx_messageInfo_QuorumHashUpdate proto.InternalMessageInfo
+var xxx_messageInfo_ResponseListSnapshots proto.InternalMessageInfo
-func (m *QuorumHashUpdate) GetQuorumHash() []byte {
+func (m *ResponseListSnapshots) GetSnapshots() []*Snapshot {
if m != nil {
- return m.QuorumHash
+ return m.Snapshots
}
return nil
}
-// VoteInfo
-type VoteInfo struct {
- Validator Validator `protobuf:"bytes,1,opt,name=validator,proto3" json:"validator"`
- SignedLastBlock bool `protobuf:"varint,2,opt,name=signed_last_block,json=signedLastBlock,proto3" json:"signed_last_block,omitempty"`
+type ResponseOfferSnapshot struct {
+ Result ResponseOfferSnapshot_Result `protobuf:"varint,1,opt,name=result,proto3,enum=tendermint.abci.ResponseOfferSnapshot_Result" json:"result,omitempty"`
}
-func (m *VoteInfo) Reset() { *m = VoteInfo{} }
-func (m *VoteInfo) String() string { return proto.CompactTextString(m) }
-func (*VoteInfo) ProtoMessage() {}
-func (*VoteInfo) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{40}
+func (m *ResponseOfferSnapshot) Reset() { *m = ResponseOfferSnapshot{} }
+func (m *ResponseOfferSnapshot) String() string { return proto.CompactTextString(m) }
+func (*ResponseOfferSnapshot) ProtoMessage() {}
+func (*ResponseOfferSnapshot) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{33}
}
-func (m *VoteInfo) XXX_Unmarshal(b []byte) error {
+func (m *ResponseOfferSnapshot) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *VoteInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseOfferSnapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_VoteInfo.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseOfferSnapshot.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -2943,58 +2916,41 @@ func (m *VoteInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
-func (m *VoteInfo) XXX_Merge(src proto.Message) {
- xxx_messageInfo_VoteInfo.Merge(m, src)
+func (m *ResponseOfferSnapshot) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseOfferSnapshot.Merge(m, src)
}
-func (m *VoteInfo) XXX_Size() int {
+func (m *ResponseOfferSnapshot) XXX_Size() int {
return m.Size()
}
-func (m *VoteInfo) XXX_DiscardUnknown() {
- xxx_messageInfo_VoteInfo.DiscardUnknown(m)
+func (m *ResponseOfferSnapshot) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseOfferSnapshot.DiscardUnknown(m)
}
-var xxx_messageInfo_VoteInfo proto.InternalMessageInfo
-
-func (m *VoteInfo) GetValidator() Validator {
- if m != nil {
- return m.Validator
- }
- return Validator{}
-}
+var xxx_messageInfo_ResponseOfferSnapshot proto.InternalMessageInfo
-func (m *VoteInfo) GetSignedLastBlock() bool {
+func (m *ResponseOfferSnapshot) GetResult() ResponseOfferSnapshot_Result {
if m != nil {
- return m.SignedLastBlock
+ return m.Result
}
- return false
+ return ResponseOfferSnapshot_UNKNOWN
}
-type Evidence struct {
- Type EvidenceType `protobuf:"varint,1,opt,name=type,proto3,enum=tendermint.abci.EvidenceType" json:"type,omitempty"`
- // The offending validator
- Validator Validator `protobuf:"bytes,2,opt,name=validator,proto3" json:"validator"`
- // The height when the offense occurred
- Height int64 `protobuf:"varint,3,opt,name=height,proto3" json:"height,omitempty"`
- // The corresponding time where the offense occurred
- Time time.Time `protobuf:"bytes,4,opt,name=time,proto3,stdtime" json:"time"`
- // Total voting power of the validator set in case the ABCI application does
- // not store historical validators.
- // https://github.com/tendermint/tendermint/issues/4581
- TotalVotingPower int64 `protobuf:"varint,5,opt,name=total_voting_power,json=totalVotingPower,proto3" json:"total_voting_power,omitempty"`
+type ResponseLoadSnapshotChunk struct {
+ Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
}
-func (m *Evidence) Reset() { *m = Evidence{} }
-func (m *Evidence) String() string { return proto.CompactTextString(m) }
-func (*Evidence) ProtoMessage() {}
-func (*Evidence) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{41}
+func (m *ResponseLoadSnapshotChunk) Reset() { *m = ResponseLoadSnapshotChunk{} }
+func (m *ResponseLoadSnapshotChunk) String() string { return proto.CompactTextString(m) }
+func (*ResponseLoadSnapshotChunk) ProtoMessage() {}
+func (*ResponseLoadSnapshotChunk) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{34}
}
-func (m *Evidence) XXX_Unmarshal(b []byte) error {
+func (m *ResponseLoadSnapshotChunk) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *Evidence) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponseLoadSnapshotChunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_Evidence.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponseLoadSnapshotChunk.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -3004,74 +2960,105 @@ func (m *Evidence) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
-func (m *Evidence) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Evidence.Merge(m, src)
+func (m *ResponseLoadSnapshotChunk) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseLoadSnapshotChunk.Merge(m, src)
}
-func (m *Evidence) XXX_Size() int {
+func (m *ResponseLoadSnapshotChunk) XXX_Size() int {
return m.Size()
}
-func (m *Evidence) XXX_DiscardUnknown() {
- xxx_messageInfo_Evidence.DiscardUnknown(m)
-}
+func (m *ResponseLoadSnapshotChunk) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseLoadSnapshotChunk.DiscardUnknown(m)
+}
-var xxx_messageInfo_Evidence proto.InternalMessageInfo
+var xxx_messageInfo_ResponseLoadSnapshotChunk proto.InternalMessageInfo
-func (m *Evidence) GetType() EvidenceType {
+func (m *ResponseLoadSnapshotChunk) GetChunk() []byte {
if m != nil {
- return m.Type
+ return m.Chunk
}
- return EvidenceType_UNKNOWN
+ return nil
}
-func (m *Evidence) GetValidator() Validator {
- if m != nil {
- return m.Validator
+type ResponseApplySnapshotChunk struct {
+ Result ResponseApplySnapshotChunk_Result `protobuf:"varint,1,opt,name=result,proto3,enum=tendermint.abci.ResponseApplySnapshotChunk_Result" json:"result,omitempty"`
+ RefetchChunks []uint32 `protobuf:"varint,2,rep,packed,name=refetch_chunks,json=refetchChunks,proto3" json:"refetch_chunks,omitempty"`
+ RejectSenders []string `protobuf:"bytes,3,rep,name=reject_senders,json=rejectSenders,proto3" json:"reject_senders,omitempty"`
+}
+
+func (m *ResponseApplySnapshotChunk) Reset() { *m = ResponseApplySnapshotChunk{} }
+func (m *ResponseApplySnapshotChunk) String() string { return proto.CompactTextString(m) }
+func (*ResponseApplySnapshotChunk) ProtoMessage() {}
+func (*ResponseApplySnapshotChunk) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{35}
+}
+func (m *ResponseApplySnapshotChunk) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *ResponseApplySnapshotChunk) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ResponseApplySnapshotChunk.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return Validator{}
}
+func (m *ResponseApplySnapshotChunk) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseApplySnapshotChunk.Merge(m, src)
+}
+func (m *ResponseApplySnapshotChunk) XXX_Size() int {
+ return m.Size()
+}
+func (m *ResponseApplySnapshotChunk) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseApplySnapshotChunk.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ResponseApplySnapshotChunk proto.InternalMessageInfo
-func (m *Evidence) GetHeight() int64 {
+func (m *ResponseApplySnapshotChunk) GetResult() ResponseApplySnapshotChunk_Result {
if m != nil {
- return m.Height
+ return m.Result
}
- return 0
+ return ResponseApplySnapshotChunk_UNKNOWN
}
-func (m *Evidence) GetTime() time.Time {
+func (m *ResponseApplySnapshotChunk) GetRefetchChunks() []uint32 {
if m != nil {
- return m.Time
+ return m.RefetchChunks
}
- return time.Time{}
+ return nil
}
-func (m *Evidence) GetTotalVotingPower() int64 {
+func (m *ResponseApplySnapshotChunk) GetRejectSenders() []string {
if m != nil {
- return m.TotalVotingPower
+ return m.RejectSenders
}
- return 0
+ return nil
}
-type Snapshot struct {
- Height uint64 `protobuf:"varint,1,opt,name=height,proto3" json:"height,omitempty"`
- Format uint32 `protobuf:"varint,2,opt,name=format,proto3" json:"format,omitempty"`
- Chunks uint32 `protobuf:"varint,3,opt,name=chunks,proto3" json:"chunks,omitempty"`
- Hash []byte `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
- Metadata []byte `protobuf:"bytes,5,opt,name=metadata,proto3" json:"metadata,omitempty"`
- CoreChainLockedHeight uint32 `protobuf:"varint,100,opt,name=core_chain_locked_height,json=coreChainLockedHeight,proto3" json:"core_chain_locked_height,omitempty"`
+type ResponsePrepareProposal struct {
+ TxRecords []*TxRecord `protobuf:"bytes,1,rep,name=tx_records,json=txRecords,proto3" json:"tx_records,omitempty"`
+ AppHash []byte `protobuf:"bytes,2,opt,name=app_hash,json=appHash,proto3" json:"app_hash,omitempty"`
+ TxResults []*ExecTxResult `protobuf:"bytes,3,rep,name=tx_results,json=txResults,proto3" json:"tx_results,omitempty"`
+ ValidatorUpdates []*ValidatorUpdate `protobuf:"bytes,4,rep,name=validator_updates,json=validatorUpdates,proto3" json:"validator_updates,omitempty"`
+ ConsensusParamUpdates *types1.ConsensusParams `protobuf:"bytes,5,opt,name=consensus_param_updates,json=consensusParamUpdates,proto3" json:"consensus_param_updates,omitempty"`
}
-func (m *Snapshot) Reset() { *m = Snapshot{} }
-func (m *Snapshot) String() string { return proto.CompactTextString(m) }
-func (*Snapshot) ProtoMessage() {}
-func (*Snapshot) Descriptor() ([]byte, []int) {
- return fileDescriptor_252557cfdd89a31a, []int{42}
+func (m *ResponsePrepareProposal) Reset() { *m = ResponsePrepareProposal{} }
+func (m *ResponsePrepareProposal) String() string { return proto.CompactTextString(m) }
+func (*ResponsePrepareProposal) ProtoMessage() {}
+func (*ResponsePrepareProposal) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{36}
}
-func (m *Snapshot) XXX_Unmarshal(b []byte) error {
+func (m *ResponsePrepareProposal) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
-func (m *Snapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+func (m *ResponsePrepareProposal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
- return xxx_messageInfo_Snapshot.Marshal(b, m, deterministic)
+ return xxx_messageInfo_ResponsePrepareProposal.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -3081,2589 +3068,2260 @@ func (m *Snapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
-func (m *Snapshot) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Snapshot.Merge(m, src)
+func (m *ResponsePrepareProposal) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponsePrepareProposal.Merge(m, src)
}
-func (m *Snapshot) XXX_Size() int {
+func (m *ResponsePrepareProposal) XXX_Size() int {
return m.Size()
}
-func (m *Snapshot) XXX_DiscardUnknown() {
- xxx_messageInfo_Snapshot.DiscardUnknown(m)
+func (m *ResponsePrepareProposal) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponsePrepareProposal.DiscardUnknown(m)
}
-var xxx_messageInfo_Snapshot proto.InternalMessageInfo
+var xxx_messageInfo_ResponsePrepareProposal proto.InternalMessageInfo
-func (m *Snapshot) GetHeight() uint64 {
+func (m *ResponsePrepareProposal) GetTxRecords() []*TxRecord {
if m != nil {
- return m.Height
+ return m.TxRecords
}
- return 0
+ return nil
}
-func (m *Snapshot) GetFormat() uint32 {
+func (m *ResponsePrepareProposal) GetAppHash() []byte {
if m != nil {
- return m.Format
+ return m.AppHash
}
- return 0
+ return nil
}
-func (m *Snapshot) GetChunks() uint32 {
+func (m *ResponsePrepareProposal) GetTxResults() []*ExecTxResult {
if m != nil {
- return m.Chunks
+ return m.TxResults
}
- return 0
+ return nil
}
-func (m *Snapshot) GetHash() []byte {
+func (m *ResponsePrepareProposal) GetValidatorUpdates() []*ValidatorUpdate {
if m != nil {
- return m.Hash
+ return m.ValidatorUpdates
}
return nil
}
-func (m *Snapshot) GetMetadata() []byte {
+func (m *ResponsePrepareProposal) GetConsensusParamUpdates() *types1.ConsensusParams {
if m != nil {
- return m.Metadata
+ return m.ConsensusParamUpdates
}
return nil
}
-func (m *Snapshot) GetCoreChainLockedHeight() uint32 {
- if m != nil {
- return m.CoreChainLockedHeight
- }
- return 0
+type ResponseProcessProposal struct {
+ Status ResponseProcessProposal_ProposalStatus `protobuf:"varint,1,opt,name=status,proto3,enum=tendermint.abci.ResponseProcessProposal_ProposalStatus" json:"status,omitempty"`
+ AppHash []byte `protobuf:"bytes,2,opt,name=app_hash,json=appHash,proto3" json:"app_hash,omitempty"`
+ TxResults []*ExecTxResult `protobuf:"bytes,3,rep,name=tx_results,json=txResults,proto3" json:"tx_results,omitempty"`
+ ValidatorUpdates []*ValidatorUpdate `protobuf:"bytes,4,rep,name=validator_updates,json=validatorUpdates,proto3" json:"validator_updates,omitempty"`
+ ConsensusParamUpdates *types1.ConsensusParams `protobuf:"bytes,5,opt,name=consensus_param_updates,json=consensusParamUpdates,proto3" json:"consensus_param_updates,omitempty"`
}
-func init() {
- proto.RegisterEnum("tendermint.abci.CheckTxType", CheckTxType_name, CheckTxType_value)
- proto.RegisterEnum("tendermint.abci.EvidenceType", EvidenceType_name, EvidenceType_value)
- proto.RegisterEnum("tendermint.abci.ResponseOfferSnapshot_Result", ResponseOfferSnapshot_Result_name, ResponseOfferSnapshot_Result_value)
- proto.RegisterEnum("tendermint.abci.ResponseApplySnapshotChunk_Result", ResponseApplySnapshotChunk_Result_name, ResponseApplySnapshotChunk_Result_value)
- proto.RegisterType((*Request)(nil), "tendermint.abci.Request")
- proto.RegisterType((*RequestEcho)(nil), "tendermint.abci.RequestEcho")
- proto.RegisterType((*RequestFlush)(nil), "tendermint.abci.RequestFlush")
- proto.RegisterType((*RequestInfo)(nil), "tendermint.abci.RequestInfo")
- proto.RegisterType((*RequestInitChain)(nil), "tendermint.abci.RequestInitChain")
- proto.RegisterType((*RequestQuery)(nil), "tendermint.abci.RequestQuery")
- proto.RegisterType((*RequestBeginBlock)(nil), "tendermint.abci.RequestBeginBlock")
- proto.RegisterType((*RequestCheckTx)(nil), "tendermint.abci.RequestCheckTx")
- proto.RegisterType((*RequestDeliverTx)(nil), "tendermint.abci.RequestDeliverTx")
- proto.RegisterType((*RequestEndBlock)(nil), "tendermint.abci.RequestEndBlock")
- proto.RegisterType((*RequestCommit)(nil), "tendermint.abci.RequestCommit")
- proto.RegisterType((*RequestListSnapshots)(nil), "tendermint.abci.RequestListSnapshots")
- proto.RegisterType((*RequestOfferSnapshot)(nil), "tendermint.abci.RequestOfferSnapshot")
- proto.RegisterType((*RequestLoadSnapshotChunk)(nil), "tendermint.abci.RequestLoadSnapshotChunk")
- proto.RegisterType((*RequestApplySnapshotChunk)(nil), "tendermint.abci.RequestApplySnapshotChunk")
- proto.RegisterType((*Response)(nil), "tendermint.abci.Response")
- proto.RegisterType((*ResponseException)(nil), "tendermint.abci.ResponseException")
- proto.RegisterType((*ResponseEcho)(nil), "tendermint.abci.ResponseEcho")
- proto.RegisterType((*ResponseFlush)(nil), "tendermint.abci.ResponseFlush")
- proto.RegisterType((*ResponseInfo)(nil), "tendermint.abci.ResponseInfo")
- proto.RegisterType((*ResponseInitChain)(nil), "tendermint.abci.ResponseInitChain")
- proto.RegisterType((*ResponseQuery)(nil), "tendermint.abci.ResponseQuery")
- proto.RegisterType((*ResponseBeginBlock)(nil), "tendermint.abci.ResponseBeginBlock")
- proto.RegisterType((*ResponseCheckTx)(nil), "tendermint.abci.ResponseCheckTx")
- proto.RegisterType((*ResponseDeliverTx)(nil), "tendermint.abci.ResponseDeliverTx")
- proto.RegisterType((*ResponseEndBlock)(nil), "tendermint.abci.ResponseEndBlock")
- proto.RegisterType((*ResponseCommit)(nil), "tendermint.abci.ResponseCommit")
- proto.RegisterType((*ResponseListSnapshots)(nil), "tendermint.abci.ResponseListSnapshots")
- proto.RegisterType((*ResponseOfferSnapshot)(nil), "tendermint.abci.ResponseOfferSnapshot")
- proto.RegisterType((*ResponseLoadSnapshotChunk)(nil), "tendermint.abci.ResponseLoadSnapshotChunk")
- proto.RegisterType((*ResponseApplySnapshotChunk)(nil), "tendermint.abci.ResponseApplySnapshotChunk")
- proto.RegisterType((*LastCommitInfo)(nil), "tendermint.abci.LastCommitInfo")
- proto.RegisterType((*Event)(nil), "tendermint.abci.Event")
- proto.RegisterType((*EventAttribute)(nil), "tendermint.abci.EventAttribute")
- proto.RegisterType((*TxResult)(nil), "tendermint.abci.TxResult")
- proto.RegisterType((*Validator)(nil), "tendermint.abci.Validator")
- proto.RegisterType((*ValidatorUpdate)(nil), "tendermint.abci.ValidatorUpdate")
- proto.RegisterType((*ValidatorSetUpdate)(nil), "tendermint.abci.ValidatorSetUpdate")
- proto.RegisterType((*ThresholdPublicKeyUpdate)(nil), "tendermint.abci.ThresholdPublicKeyUpdate")
- proto.RegisterType((*QuorumHashUpdate)(nil), "tendermint.abci.QuorumHashUpdate")
- proto.RegisterType((*VoteInfo)(nil), "tendermint.abci.VoteInfo")
- proto.RegisterType((*Evidence)(nil), "tendermint.abci.Evidence")
- proto.RegisterType((*Snapshot)(nil), "tendermint.abci.Snapshot")
+func (m *ResponseProcessProposal) Reset() { *m = ResponseProcessProposal{} }
+func (m *ResponseProcessProposal) String() string { return proto.CompactTextString(m) }
+func (*ResponseProcessProposal) ProtoMessage() {}
+func (*ResponseProcessProposal) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{37}
}
-
-func init() { proto.RegisterFile("tendermint/abci/types.proto", fileDescriptor_252557cfdd89a31a) }
-
-var fileDescriptor_252557cfdd89a31a = []byte{
- // 2912 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x5a, 0x4b, 0x73, 0xe3, 0xc6,
- 0xf1, 0x27, 0xf8, 0x66, 0xf3, 0x21, 0x6a, 0x56, 0x5e, 0xd3, 0xf4, 0xae, 0xb4, 0x86, 0xcb, 0xf6,
- 0x7a, 0x6d, 0x4b, 0x7f, 0x6b, 0xcb, 0xaf, 0xfa, 0xe7, 0x61, 0x92, 0xe6, 0x86, 0xf2, 0x2a, 0x92,
- 0x3c, 0xe2, 0xae, 0xcb, 0x71, 0xbc, 0x30, 0x48, 0x8c, 0x44, 0x78, 0x49, 0x00, 0x06, 0x86, 0xb2,
- 0xe4, 0xb3, 0x73, 0xf1, 0xc9, 0xc7, 0xe4, 0xe0, 0xaa, 0x7c, 0x81, 0x54, 0x3e, 0x40, 0xaa, 0x72,
- 0xf6, 0x25, 0x55, 0x3e, 0xe6, 0x90, 0x38, 0x2e, 0xef, 0x25, 0x95, 0x6b, 0x0e, 0x39, 0xa5, 0x2a,
- 0x35, 0x0f, 0x80, 0x00, 0x49, 0x90, 0x54, 0xf6, 0x98, 0xdb, 0x4c, 0x4f, 0x77, 0x63, 0xa6, 0x07,
- 0xf3, 0xeb, 0xdf, 0x34, 0x00, 0x4f, 0x53, 0x62, 0x19, 0xc4, 0x1d, 0x99, 0x16, 0xdd, 0xd1, 0x7b,
- 0x7d, 0x73, 0x87, 0x5e, 0x38, 0xc4, 0xdb, 0x76, 0x5c, 0x9b, 0xda, 0x68, 0x6d, 0x32, 0xb8, 0xcd,
- 0x06, 0xeb, 0xd7, 0x43, 0xda, 0x7d, 0xf7, 0xc2, 0xa1, 0xf6, 0x8e, 0xe3, 0xda, 0xf6, 0x89, 0xd0,
- 0xaf, 0x5f, 0x0b, 0x0d, 0x73, 0x3f, 0x61, 0x6f, 0x91, 0x51, 0x69, 0xfc, 0x90, 0x5c, 0xf8, 0xa3,
- 0xd7, 0x67, 0x6c, 0x1d, 0xdd, 0xd5, 0x47, 0xfe, 0xf0, 0xd6, 0xa9, 0x6d, 0x9f, 0x0e, 0xc9, 0x0e,
- 0xef, 0xf5, 0xc6, 0x27, 0x3b, 0xd4, 0x1c, 0x11, 0x8f, 0xea, 0x23, 0x47, 0x2a, 0x6c, 0x9c, 0xda,
- 0xa7, 0x36, 0x6f, 0xee, 0xb0, 0x96, 0x90, 0xaa, 0x7f, 0xca, 0x41, 0x0e, 0x93, 0x4f, 0xc7, 0xc4,
- 0xa3, 0x68, 0x17, 0xd2, 0xa4, 0x3f, 0xb0, 0x6b, 0xca, 0x0d, 0xe5, 0x66, 0x71, 0xf7, 0xda, 0xf6,
- 0xd4, 0xe2, 0xb6, 0xa5, 0x5e, 0xbb, 0x3f, 0xb0, 0x3b, 0x09, 0xcc, 0x75, 0xd1, 0x6b, 0x90, 0x39,
- 0x19, 0x8e, 0xbd, 0x41, 0x2d, 0xc9, 0x8d, 0xae, 0xc7, 0x19, 0xdd, 0x61, 0x4a, 0x9d, 0x04, 0x16,
- 0xda, 0xec, 0x51, 0xa6, 0x75, 0x62, 0xd7, 0x52, 0x8b, 0x1f, 0xb5, 0x67, 0x9d, 0xf0, 0x47, 0x31,
- 0x5d, 0xd4, 0x04, 0x30, 0x2d, 0x93, 0x6a, 0xfd, 0x81, 0x6e, 0x5a, 0xb5, 0x34, 0xb7, 0x7c, 0x26,
- 0xde, 0xd2, 0xa4, 0x2d, 0xa6, 0xd8, 0x49, 0xe0, 0x82, 0xe9, 0x77, 0xd8, 0x74, 0x3f, 0x1d, 0x13,
- 0xf7, 0xa2, 0x96, 0x59, 0x3c, 0xdd, 0xf7, 0x98, 0x12, 0x9b, 0x2e, 0xd7, 0x46, 0x6d, 0x28, 0xf6,
- 0xc8, 0xa9, 0x69, 0x69, 0xbd, 0xa1, 0xdd, 0x7f, 0x58, 0xcb, 0x72, 0x63, 0x35, 0xce, 0xb8, 0xc9,
- 0x54, 0x9b, 0x4c, 0xb3, 0x93, 0xc0, 0xd0, 0x0b, 0x7a, 0xe8, 0x47, 0x90, 0xef, 0x0f, 0x48, 0xff,
- 0xa1, 0x46, 0xcf, 0x6b, 0x39, 0xee, 0x63, 0x2b, 0xce, 0x47, 0x8b, 0xe9, 0x75, 0xcf, 0x3b, 0x09,
- 0x9c, 0xeb, 0x8b, 0x26, 0x5b, 0xbf, 0x41, 0x86, 0xe6, 0x19, 0x71, 0x99, 0x7d, 0x7e, 0xf1, 0xfa,
- 0xdf, 0x11, 0x9a, 0xdc, 0x43, 0xc1, 0xf0, 0x3b, 0xe8, 0xa7, 0x50, 0x20, 0x96, 0x21, 0x97, 0x51,
- 0xe0, 0x2e, 0x6e, 0xc4, 0xee, 0xb3, 0x65, 0xf8, 0x8b, 0xc8, 0x13, 0xd9, 0x46, 0x6f, 0x42, 0xb6,
- 0x6f, 0x8f, 0x46, 0x26, 0xad, 0x01, 0xb7, 0xde, 0x8c, 0x5d, 0x00, 0xd7, 0xea, 0x24, 0xb0, 0xd4,
- 0x47, 0x07, 0x50, 0x19, 0x9a, 0x1e, 0xd5, 0x3c, 0x4b, 0x77, 0xbc, 0x81, 0x4d, 0xbd, 0x5a, 0x91,
- 0x7b, 0x78, 0x2e, 0xce, 0xc3, 0xbe, 0xe9, 0xd1, 0x63, 0x5f, 0xb9, 0x93, 0xc0, 0xe5, 0x61, 0x58,
- 0xc0, 0xfc, 0xd9, 0x27, 0x27, 0xc4, 0x0d, 0x1c, 0xd6, 0x4a, 0x8b, 0xfd, 0x1d, 0x32, 0x6d, 0xdf,
- 0x9e, 0xf9, 0xb3, 0xc3, 0x02, 0xf4, 0x21, 0x5c, 0x19, 0xda, 0xba, 0x11, 0xb8, 0xd3, 0xfa, 0x83,
- 0xb1, 0xf5, 0xb0, 0x56, 0xe6, 0x4e, 0x5f, 0x8c, 0x9d, 0xa4, 0xad, 0x1b, 0xbe, 0x8b, 0x16, 0x33,
- 0xe8, 0x24, 0xf0, 0xfa, 0x70, 0x5a, 0x88, 0x1e, 0xc0, 0x86, 0xee, 0x38, 0xc3, 0x8b, 0x69, 0xef,
- 0x15, 0xee, 0xfd, 0x56, 0x9c, 0xf7, 0x06, 0xb3, 0x99, 0x76, 0x8f, 0xf4, 0x19, 0x69, 0x33, 0x07,
- 0x99, 0x33, 0x7d, 0x38, 0x26, 0xea, 0x0b, 0x50, 0x0c, 0x1d, 0x53, 0x54, 0x83, 0xdc, 0x88, 0x78,
- 0x9e, 0x7e, 0x4a, 0xf8, 0xa9, 0x2e, 0x60, 0xbf, 0xab, 0x56, 0xa0, 0x14, 0x3e, 0x9a, 0xea, 0x57,
- 0x4a, 0x60, 0xc9, 0x4e, 0x1d, 0xb3, 0x3c, 0x23, 0xae, 0x67, 0xda, 0x96, 0x6f, 0x29, 0xbb, 0xe8,
- 0x59, 0x28, 0xf3, 0xf7, 0x47, 0xf3, 0xc7, 0xd9, 0xd1, 0x4f, 0xe3, 0x12, 0x17, 0xde, 0x97, 0x4a,
- 0x5b, 0x50, 0x74, 0x76, 0x9d, 0x40, 0x25, 0xc5, 0x55, 0xc0, 0xd9, 0x75, 0x7c, 0x85, 0x67, 0xa0,
- 0xc4, 0x56, 0x1a, 0x68, 0xa4, 0xf9, 0x43, 0x8a, 0x4c, 0x26, 0x55, 0xd4, 0x2f, 0x52, 0x50, 0x9d,
- 0x3e, 0xce, 0xe8, 0x4d, 0x48, 0x33, 0x64, 0x93, 0x20, 0x55, 0xdf, 0x16, 0xb0, 0xb7, 0xed, 0xc3,
- 0xde, 0x76, 0xd7, 0x87, 0xbd, 0x66, 0xfe, 0x9b, 0xef, 0xb6, 0x12, 0x5f, 0xfd, 0x6d, 0x4b, 0xc1,
- 0xdc, 0x02, 0x3d, 0xc5, 0x4e, 0x9f, 0x6e, 0x5a, 0x9a, 0x69, 0xf0, 0x29, 0x17, 0xd8, 0xd1, 0xd2,
- 0x4d, 0x6b, 0xcf, 0x40, 0xfb, 0x50, 0xed, 0xdb, 0x96, 0x47, 0x2c, 0x6f, 0xec, 0x69, 0x02, 0x56,
- 0x25, 0x34, 0x45, 0x0e, 0x98, 0x00, 0xeb, 0x96, 0xaf, 0x79, 0xc4, 0x15, 0xf1, 0x5a, 0x3f, 0x2a,
- 0x40, 0x07, 0x50, 0x3e, 0xd3, 0x87, 0xa6, 0xa1, 0x53, 0xdb, 0xd5, 0x3c, 0x42, 0x25, 0x56, 0x3d,
- 0x3b, 0xb3, 0xcb, 0xf7, 0x7d, 0xad, 0x63, 0x42, 0xef, 0x39, 0x86, 0x4e, 0x49, 0x33, 0xfd, 0xcd,
- 0x77, 0x5b, 0x0a, 0x2e, 0x9d, 0x85, 0x46, 0xd0, 0xf3, 0xb0, 0xa6, 0x3b, 0x8e, 0xe6, 0x51, 0x9d,
- 0x12, 0xad, 0x77, 0x41, 0x89, 0xc7, 0xe1, 0xab, 0x84, 0xcb, 0xba, 0xe3, 0x1c, 0x33, 0x69, 0x93,
- 0x09, 0xd1, 0x73, 0x50, 0x61, 0x48, 0x67, 0xea, 0x43, 0x6d, 0x40, 0xcc, 0xd3, 0x01, 0xe5, 0x40,
- 0x95, 0xc2, 0x65, 0x29, 0xed, 0x70, 0x21, 0xda, 0x86, 0x2b, 0xbe, 0x5a, 0xdf, 0x76, 0x89, 0xaf,
- 0xcb, 0x00, 0xa9, 0x8c, 0xd7, 0xe5, 0x50, 0xcb, 0x76, 0x89, 0xd0, 0x57, 0x8d, 0xe0, 0x4d, 0xe1,
- 0xa8, 0x88, 0x10, 0xa4, 0x0d, 0x9d, 0xea, 0x7c, 0x07, 0x4a, 0x98, 0xb7, 0x99, 0xcc, 0xd1, 0xe9,
- 0x40, 0xc6, 0x95, 0xb7, 0xd1, 0x55, 0xc8, 0x4a, 0xd7, 0x29, 0x3e, 0x0d, 0xd9, 0x43, 0x1b, 0x90,
- 0x71, 0x5c, 0xfb, 0x8c, 0xf0, 0xb0, 0xe4, 0xb1, 0xe8, 0xa8, 0x5f, 0x24, 0x61, 0x7d, 0x06, 0x3f,
- 0x99, 0xdf, 0x81, 0xee, 0x0d, 0xfc, 0x67, 0xb1, 0x36, 0x7a, 0x9d, 0xf9, 0xd5, 0x0d, 0xe2, 0xca,
- 0x9c, 0x53, 0x9b, 0xdd, 0xa2, 0x0e, 0x1f, 0xe7, 0xc1, 0x4c, 0x60, 0xa9, 0x8d, 0x0e, 0xa1, 0x3a,
- 0xd4, 0x3d, 0xaa, 0x09, 0x3c, 0xd2, 0x42, 0xf9, 0x67, 0x16, 0x85, 0xf7, 0x75, 0x1f, 0xc1, 0xd8,
- 0x61, 0x90, 0x8e, 0x2a, 0xc3, 0x88, 0x14, 0x61, 0xd8, 0xe8, 0x5d, 0x7c, 0xae, 0x5b, 0xd4, 0xb4,
- 0x88, 0x16, 0xec, 0x98, 0x57, 0x4b, 0xdf, 0x48, 0xdd, 0x2c, 0xee, 0x3e, 0x35, 0xe3, 0xb4, 0x7d,
- 0x66, 0x1a, 0xc4, 0xea, 0x13, 0xe9, 0xee, 0x4a, 0x60, 0x1c, 0xbc, 0x07, 0x9e, 0x8a, 0xa1, 0x12,
- 0xcd, 0x00, 0xa8, 0x02, 0x49, 0x7a, 0x2e, 0x03, 0x90, 0xa4, 0xe7, 0xe8, 0xff, 0x20, 0xcd, 0x16,
- 0xc9, 0x17, 0x5f, 0x99, 0x93, 0x3a, 0xa5, 0x5d, 0xf7, 0xc2, 0x21, 0x98, 0x6b, 0xaa, 0x6a, 0x70,
- 0x8c, 0x82, 0xac, 0x30, 0xed, 0x55, 0x7d, 0x11, 0xd6, 0xa6, 0x60, 0x3f, 0xb4, 0x7f, 0x4a, 0x78,
- 0xff, 0xd4, 0x35, 0x28, 0x47, 0x30, 0x5e, 0xbd, 0x0a, 0x1b, 0xf3, 0x20, 0x5b, 0x1d, 0x04, 0xf2,
- 0x08, 0xf4, 0xa2, 0xd7, 0x20, 0x1f, 0x60, 0xb6, 0x38, 0xc6, 0xb3, 0xb1, 0xf2, 0x95, 0x71, 0xa0,
- 0xca, 0xce, 0x2f, 0x3b, 0x06, 0xfc, 0x7d, 0x48, 0xf2, 0x89, 0xe7, 0x74, 0xc7, 0xe9, 0xe8, 0xde,
- 0x40, 0xfd, 0x18, 0x6a, 0x71, 0x78, 0x3c, 0xb5, 0x8c, 0x74, 0xf0, 0x1a, 0x5e, 0x85, 0xec, 0x89,
- 0xed, 0x8e, 0x74, 0xca, 0x9d, 0x95, 0xb1, 0xec, 0xb1, 0xd7, 0x53, 0x60, 0x73, 0x8a, 0x8b, 0x45,
- 0x47, 0xd5, 0xe0, 0xa9, 0x58, 0x4c, 0x66, 0x26, 0xa6, 0x65, 0x10, 0x11, 0xcf, 0x32, 0x16, 0x9d,
- 0x89, 0x23, 0x31, 0x59, 0xd1, 0x61, 0x8f, 0xf5, 0xf8, 0x5a, 0xb9, 0xff, 0x02, 0x96, 0x3d, 0xf5,
- 0xb7, 0x79, 0xc8, 0x63, 0xe2, 0x39, 0x0c, 0x4b, 0x50, 0x13, 0x0a, 0xe4, 0xbc, 0x4f, 0x1c, 0xea,
- 0xc3, 0xef, 0x7c, 0xb6, 0x21, 0xb4, 0xdb, 0xbe, 0x26, 0x4b, 0xf5, 0x81, 0x19, 0xba, 0x2d, 0xd9,
- 0x5c, 0x3c, 0x31, 0x93, 0xe6, 0x61, 0x3a, 0xf7, 0xba, 0x4f, 0xe7, 0x52, 0xb1, 0xd9, 0x5d, 0x58,
- 0x4d, 0xf1, 0xb9, 0xdb, 0x92, 0xcf, 0xa5, 0x97, 0x3c, 0x2c, 0x42, 0xe8, 0x5a, 0x11, 0x42, 0x97,
- 0x59, 0xb2, 0xcc, 0x18, 0x46, 0xf7, 0xba, 0xcf, 0xe8, 0xb2, 0x4b, 0x66, 0x3c, 0x45, 0xe9, 0xee,
- 0x44, 0x29, 0x5d, 0x2e, 0x06, 0xa2, 0x7d, 0xeb, 0x58, 0x4e, 0xf7, 0xe3, 0x10, 0xa7, 0xcb, 0xc7,
- 0x12, 0x2a, 0xe1, 0x64, 0x0e, 0xa9, 0x6b, 0x45, 0x48, 0x5d, 0x61, 0x49, 0x0c, 0x62, 0x58, 0xdd,
- 0xdb, 0x61, 0x56, 0x07, 0xb1, 0xc4, 0x50, 0xee, 0xf7, 0x3c, 0x5a, 0xf7, 0x56, 0x40, 0xeb, 0x8a,
- 0xb1, 0xbc, 0x54, 0xae, 0x61, 0x9a, 0xd7, 0x1d, 0xce, 0xf0, 0x3a, 0xc1, 0xc3, 0x9e, 0x8f, 0x75,
- 0xb1, 0x84, 0xd8, 0x1d, 0xce, 0x10, 0xbb, 0xf2, 0x12, 0x87, 0x4b, 0x98, 0xdd, 0x2f, 0xe7, 0x33,
- 0xbb, 0x78, 0xee, 0x25, 0xa7, 0xb9, 0x1a, 0xb5, 0xd3, 0x62, 0xa8, 0xdd, 0x1a, 0x77, 0xff, 0x52,
- 0xac, 0xfb, 0xcb, 0x73, 0xbb, 0x17, 0x59, 0x86, 0x9c, 0x3a, 0xf3, 0x0c, 0x65, 0x88, 0xeb, 0xda,
- 0xae, 0x64, 0x69, 0xa2, 0xa3, 0xde, 0x64, 0x39, 0x7b, 0x72, 0xbe, 0x17, 0xf0, 0x40, 0x8e, 0xe6,
- 0xa1, 0x33, 0xad, 0xfe, 0x55, 0x99, 0xd8, 0xf2, 0x34, 0x17, 0xce, 0xf7, 0x05, 0x99, 0xef, 0x43,
- 0xec, 0x30, 0x19, 0x65, 0x87, 0x5b, 0x50, 0x64, 0x28, 0x3d, 0x45, 0xfc, 0x74, 0x27, 0x20, 0x7e,
- 0xb7, 0x60, 0x9d, 0xa7, 0x61, 0xc1, 0x21, 0x25, 0x34, 0xa7, 0x79, 0x86, 0x59, 0x63, 0x03, 0xe2,
- 0xe5, 0x14, 0x18, 0xfd, 0x0a, 0x5c, 0x09, 0xe9, 0x06, 0xe8, 0x2f, 0xd8, 0x4f, 0x35, 0xd0, 0x6e,
- 0x88, 0x34, 0xf0, 0x6e, 0x3a, 0x6f, 0x54, 0x09, 0xbe, 0x2e, 0xb3, 0xbc, 0x4b, 0x04, 0xb2, 0x68,
- 0x4c, 0x85, 0x18, 0xf2, 0x51, 0xea, 0xdf, 0x93, 0x93, 0x30, 0x4e, 0x68, 0xe5, 0x3c, 0x06, 0xa8,
- 0xfc, 0xd7, 0x0c, 0x30, 0x9c, 0xaa, 0x52, 0x91, 0x54, 0x85, 0x3e, 0x84, 0x8d, 0x08, 0x39, 0xd4,
- 0xc6, 0x9c, 0xf8, 0xd5, 0x8c, 0xcb, 0x71, 0xc4, 0x04, 0x46, 0x67, 0x33, 0x23, 0xe8, 0x23, 0x78,
- 0xda, 0x22, 0xe7, 0x33, 0x8b, 0xf7, 0x9f, 0x41, 0x66, 0xcf, 0xb6, 0xbf, 0x20, 0x97, 0xf0, 0x38,
- 0xec, 0xdb, 0xfd, 0x87, 0xf8, 0x49, 0xe6, 0x23, 0x22, 0x92, 0xee, 0x63, 0x98, 0xe3, 0x49, 0x1c,
- 0x73, 0xfc, 0x97, 0x32, 0x79, 0xb9, 0x02, 0xee, 0xd8, 0xb7, 0x0d, 0x22, 0x13, 0x25, 0x6f, 0xa3,
- 0x2a, 0xa4, 0x86, 0xf6, 0xa9, 0x4c, 0x87, 0xac, 0xc9, 0xb4, 0x82, 0x6c, 0x52, 0x90, 0xc9, 0x22,
- 0xc8, 0xb1, 0x19, 0xfe, 0xaa, 0xc8, 0x1c, 0x5b, 0x85, 0xd4, 0x43, 0x22, 0xb0, 0xbf, 0x84, 0x59,
- 0x93, 0xe9, 0xf1, 0xd3, 0xc2, 0x11, 0xbd, 0x84, 0x45, 0x07, 0xbd, 0x09, 0x05, 0x5e, 0x87, 0xd1,
- 0x6c, 0xc7, 0x93, 0x30, 0xfd, 0x74, 0x38, 0x0c, 0xa2, 0xdc, 0xb2, 0x7d, 0xc4, 0x74, 0x0e, 0x1d,
- 0x0f, 0xe7, 0x1d, 0xd9, 0x0a, 0xd1, 0x87, 0x42, 0x84, 0xc5, 0x5e, 0x83, 0x02, 0x9b, 0xbd, 0xe7,
- 0xe8, 0x7d, 0xc2, 0x31, 0xb7, 0x80, 0x27, 0x02, 0xf5, 0x01, 0xa0, 0xd9, 0xcc, 0x81, 0x3a, 0x90,
- 0x25, 0x67, 0xc4, 0xa2, 0xec, 0xd5, 0x62, 0x14, 0xf1, 0xea, 0x1c, 0x8a, 0x48, 0x2c, 0xda, 0xac,
- 0xb1, 0x0d, 0xfe, 0xc7, 0x77, 0x5b, 0x55, 0xa1, 0xfd, 0xb2, 0x3d, 0x32, 0x29, 0x19, 0x39, 0xf4,
- 0x02, 0x4b, 0x7b, 0xf5, 0x2f, 0x49, 0xc6, 0xd7, 0x22, 0x59, 0x65, 0x6e, 0x6c, 0xfd, 0xb3, 0x9b,
- 0x0c, 0x71, 0xf5, 0xd5, 0xe2, 0xbd, 0x09, 0x70, 0xaa, 0x7b, 0xda, 0x67, 0xba, 0x45, 0x89, 0x21,
- 0x83, 0x1e, 0x92, 0xa0, 0x3a, 0xe4, 0x59, 0x6f, 0xec, 0x11, 0x43, 0x5e, 0x33, 0x82, 0x7e, 0x68,
- 0x9d, 0xb9, 0xc7, 0x5b, 0x67, 0x34, 0xca, 0xf9, 0xa9, 0x28, 0x87, 0xb8, 0x54, 0x21, 0xcc, 0xa5,
- 0xd8, 0xdc, 0x1c, 0xd7, 0xb4, 0x5d, 0x93, 0x5e, 0xf0, 0xad, 0x49, 0xe1, 0xa0, 0xcf, 0x6e, 0xaf,
- 0x23, 0x32, 0x72, 0x6c, 0x7b, 0xa8, 0x09, 0xdc, 0x2c, 0x72, 0xd3, 0x92, 0x14, 0xb6, 0x39, 0x7c,
- 0xfe, 0x2a, 0x84, 0x11, 0x13, 0xce, 0xfc, 0x3f, 0x17, 0x60, 0xf5, 0x9f, 0x49, 0x76, 0x75, 0x88,
- 0xf2, 0x06, 0xf4, 0x01, 0x3c, 0x39, 0x05, 0x95, 0x12, 0x5f, 0x3c, 0xc9, 0x35, 0x57, 0x40, 0xcc,
- 0x27, 0xa2, 0x88, 0x29, 0xf0, 0xc5, 0x0b, 0xad, 0x2b, 0xf5, 0x98, 0xeb, 0x5a, 0x82, 0x84, 0xc6,
- 0x63, 0x22, 0x61, 0x1c, 0x8a, 0x93, 0xcb, 0xde, 0xf4, 0xe7, 0xa0, 0xb8, 0xba, 0xc7, 0xee, 0x80,
- 0x61, 0xb6, 0x35, 0xf7, 0x2d, 0x7b, 0x16, 0xca, 0x2e, 0xa1, 0x6c, 0x61, 0x91, 0x5b, 0x76, 0x49,
- 0x08, 0x25, 0x02, 0x1f, 0xc1, 0x13, 0x73, 0x59, 0x17, 0x7a, 0x03, 0x0a, 0x13, 0xc2, 0xa6, 0xc4,
- 0x5c, 0x58, 0x83, 0x4b, 0xd8, 0x44, 0x57, 0xfd, 0xa3, 0x32, 0x71, 0x19, 0xbd, 0xd6, 0xb5, 0x21,
- 0xeb, 0x12, 0x6f, 0x3c, 0x14, 0x17, 0xad, 0xca, 0xee, 0x2b, 0xab, 0xf1, 0x35, 0x26, 0x1d, 0x0f,
- 0x29, 0x96, 0xc6, 0xea, 0x03, 0xc8, 0x0a, 0x09, 0x2a, 0x42, 0xee, 0xde, 0xc1, 0xdd, 0x83, 0xc3,
- 0xf7, 0x0f, 0xaa, 0x09, 0x04, 0x90, 0x6d, 0xb4, 0x5a, 0xed, 0xa3, 0x6e, 0x55, 0x41, 0x05, 0xc8,
- 0x34, 0x9a, 0x87, 0xb8, 0x5b, 0x4d, 0x32, 0x31, 0x6e, 0xbf, 0xdb, 0x6e, 0x75, 0xab, 0x29, 0xb4,
- 0x0e, 0x65, 0xd1, 0xd6, 0xee, 0x1c, 0xe2, 0x9f, 0x37, 0xba, 0xd5, 0x74, 0x48, 0x74, 0xdc, 0x3e,
- 0x78, 0xa7, 0x8d, 0xab, 0x19, 0xf5, 0x55, 0x76, 0x93, 0x8b, 0x61, 0x78, 0x93, 0x3b, 0x9b, 0x12,
- 0xba, 0xb3, 0xa9, 0xbf, 0x4e, 0x42, 0x3d, 0x9e, 0xb6, 0xa1, 0x77, 0xa7, 0x16, 0xbe, 0x7b, 0x09,
- 0xce, 0x37, 0xb5, 0x7a, 0xf4, 0x1c, 0x54, 0x5c, 0x72, 0x42, 0x68, 0x7f, 0x20, 0x68, 0x24, 0x3b,
- 0x53, 0xa9, 0x9b, 0x65, 0x5c, 0x96, 0x52, 0x6e, 0xe4, 0x09, 0xb5, 0x4f, 0x48, 0x9f, 0x6a, 0x02,
- 0xf2, 0xc4, 0x81, 0x29, 0x30, 0x35, 0x26, 0x3d, 0x16, 0x42, 0xf5, 0xe3, 0x4b, 0xc5, 0xb2, 0x00,
- 0x19, 0xdc, 0xee, 0xe2, 0x0f, 0xaa, 0x29, 0x84, 0xa0, 0xc2, 0x9b, 0xda, 0xf1, 0x41, 0xe3, 0xe8,
- 0xb8, 0x73, 0xc8, 0x62, 0x79, 0x05, 0xd6, 0xfc, 0x58, 0xfa, 0xc2, 0x8c, 0xfa, 0x1b, 0x05, 0x2a,
- 0xd1, 0x62, 0x09, 0x8b, 0xa1, 0x6b, 0x8f, 0x2d, 0x83, 0x47, 0x23, 0x83, 0x45, 0x87, 0xf1, 0xc2,
- 0x4f, 0xc7, 0xb6, 0x3b, 0x1e, 0x85, 0x59, 0x11, 0x08, 0x11, 0x27, 0x46, 0x2f, 0xc0, 0x9a, 0xa0,
- 0x79, 0x9e, 0x79, 0x6a, 0xe9, 0x74, 0xec, 0x8a, 0x02, 0x51, 0x09, 0x57, 0xb8, 0xf8, 0xd8, 0x97,
- 0x32, 0x45, 0x51, 0x0a, 0x9b, 0x28, 0x0a, 0x42, 0x58, 0xe1, 0xe2, 0x40, 0x51, 0xfd, 0x1c, 0x32,
- 0x1c, 0x2e, 0xd8, 0xf1, 0xe1, 0x25, 0x13, 0xc9, 0x60, 0x59, 0x1b, 0x7d, 0x04, 0xa0, 0x53, 0xea,
- 0x9a, 0xbd, 0xb1, 0x00, 0xae, 0xd4, 0xdc, 0x5b, 0x0f, 0xb7, 0x6f, 0xf8, 0x7a, 0xcd, 0x6b, 0x12,
- 0x77, 0x36, 0x26, 0xa6, 0x21, 0xec, 0x09, 0x39, 0x54, 0x0f, 0xa0, 0x12, 0xb5, 0xf5, 0xa9, 0x8a,
- 0x98, 0x43, 0x94, 0xaa, 0x08, 0x0a, 0x2d, 0xa9, 0x4a, 0x40, 0x74, 0x52, 0xa2, 0x3c, 0xc6, 0x3b,
- 0xea, 0x97, 0x0a, 0xe4, 0xbb, 0xe7, 0x72, 0x33, 0x63, 0x2a, 0x33, 0x13, 0xd3, 0x64, 0xb8, 0x0e,
- 0x21, 0x4a, 0x3d, 0xa9, 0xa0, 0x80, 0xf4, 0x76, 0xf0, 0xba, 0xa6, 0x57, 0xbd, 0x6e, 0xfa, 0x95,
- 0x34, 0x79, 0x44, 0x1b, 0x50, 0x08, 0x00, 0x8d, 0x97, 0xf3, 0xec, 0xcf, 0x64, 0x3d, 0x23, 0x85,
- 0x45, 0x07, 0x6d, 0x42, 0xd1, 0x71, 0x6d, 0x8d, 0x9e, 0x8b, 0xed, 0x16, 0x3b, 0xc9, 0x38, 0x58,
- 0xf7, 0x9c, 0x57, 0x6c, 0x7e, 0xa7, 0xc0, 0x5a, 0xe0, 0x43, 0x82, 0xea, 0xff, 0x43, 0xce, 0x19,
- 0xf7, 0x34, 0x3f, 0x4a, 0x53, 0xdf, 0x85, 0x7c, 0x8a, 0x36, 0xee, 0x0d, 0xcd, 0xfe, 0x5d, 0x72,
- 0x21, 0x01, 0x34, 0xeb, 0x8c, 0x7b, 0x77, 0x45, 0x30, 0xc5, 0x34, 0x92, 0x0b, 0xa6, 0x91, 0x9a,
- 0x9a, 0x06, 0x7a, 0x01, 0x4a, 0x96, 0x6d, 0x10, 0x4d, 0x37, 0x0c, 0x97, 0x78, 0x9e, 0x48, 0xd0,
- 0xd2, 0x73, 0x91, 0x8d, 0x34, 0xc4, 0x80, 0xfa, 0xbd, 0x02, 0x68, 0x16, 0xc4, 0xd1, 0x31, 0xac,
- 0x4f, 0xf2, 0x80, 0x9f, 0x05, 0x05, 0x9c, 0xde, 0x88, 0x4f, 0x02, 0x11, 0x1e, 0x5f, 0x3d, 0x8b,
- 0x8a, 0x3d, 0xd4, 0x85, 0x0d, 0x3a, 0x70, 0x89, 0x37, 0xb0, 0x87, 0x86, 0xe6, 0xf0, 0xf5, 0xf2,
- 0xa0, 0x24, 0x57, 0x0c, 0x4a, 0x02, 0xa3, 0xc0, 0x3e, 0x18, 0x59, 0x7a, 0x00, 0x55, 0x07, 0x6a,
- 0xdd, 0x19, 0x33, 0xb9, 0xce, 0xb8, 0x29, 0x29, 0x8f, 0x33, 0x25, 0xf5, 0x36, 0x54, 0xdf, 0x0b,
- 0x9e, 0x2f, 0x9f, 0x34, 0x35, 0x4d, 0x65, 0x66, 0x9a, 0x67, 0x90, 0xbf, 0x6f, 0x53, 0x71, 0x35,
- 0xfd, 0x09, 0x14, 0x82, 0xe8, 0x05, 0x5f, 0x04, 0x62, 0xc3, 0x2e, 0x67, 0x32, 0x31, 0x61, 0x77,
- 0x51, 0x06, 0x22, 0xc4, 0xd0, 0x26, 0xd7, 0x4c, 0x1e, 0xe6, 0x3c, 0x5e, 0x13, 0x03, 0xfb, 0xfe,
- 0x1d, 0x53, 0xfd, 0xb7, 0x02, 0x79, 0xbf, 0x82, 0x8b, 0x5e, 0x0d, 0x21, 0x4a, 0x65, 0x4e, 0xbd,
- 0xcb, 0x57, 0x9c, 0x54, 0x61, 0xa3, 0x73, 0x4d, 0x5e, 0x7e, 0xae, 0x71, 0xe5, 0x74, 0xff, 0x83,
- 0x48, 0xfa, 0xd2, 0x1f, 0x44, 0x5e, 0x06, 0x44, 0x6d, 0xaa, 0x0f, 0xb5, 0x33, 0x9b, 0x9a, 0xd6,
- 0xa9, 0x26, 0xce, 0x8f, 0x60, 0xa2, 0x55, 0x3e, 0x72, 0x9f, 0x0f, 0x1c, 0x31, 0xb9, 0xfa, 0x07,
- 0x05, 0xf2, 0x41, 0xae, 0xbf, 0x6c, 0x51, 0xf5, 0x2a, 0x64, 0x65, 0x3a, 0x13, 0x55, 0x55, 0xd9,
- 0x0b, 0xea, 0xfb, 0xe9, 0x50, 0x7d, 0xbf, 0x0e, 0xf9, 0x11, 0xa1, 0x3a, 0x27, 0x3c, 0x02, 0xd8,
- 0x83, 0x3e, 0x7a, 0x03, 0x6a, 0x71, 0x17, 0x7b, 0xce, 0xe9, 0xca, 0x8c, 0x59, 0x86, 0xe8, 0x1a,
- 0x31, 0x04, 0x11, 0xba, 0xf5, 0x16, 0x14, 0x43, 0x85, 0x71, 0x06, 0xc6, 0x07, 0xed, 0xf7, 0xab,
- 0x89, 0x7a, 0xee, 0xcb, 0xaf, 0x6f, 0xa4, 0x0e, 0xc8, 0x67, 0xa8, 0x06, 0x39, 0xdc, 0x6e, 0x75,
- 0xda, 0xad, 0xbb, 0x55, 0xa5, 0x5e, 0xfc, 0xf2, 0xeb, 0x1b, 0x39, 0x4c, 0x78, 0x91, 0xee, 0x56,
- 0x07, 0x4a, 0xe1, 0xed, 0x8c, 0xa6, 0x52, 0x04, 0x95, 0x77, 0xee, 0x1d, 0xed, 0xef, 0xb5, 0x1a,
- 0xdd, 0xb6, 0x76, 0xff, 0xb0, 0xdb, 0xae, 0x2a, 0xe8, 0x49, 0xb8, 0xb2, 0xbf, 0xf7, 0xb3, 0x4e,
- 0x57, 0x6b, 0xed, 0xef, 0xb5, 0x0f, 0xba, 0x5a, 0xa3, 0xdb, 0x6d, 0xb4, 0xee, 0x56, 0x93, 0xbb,
- 0xbf, 0x2f, 0xc0, 0x5a, 0xa3, 0xd9, 0xda, 0x63, 0x34, 0xc0, 0xec, 0xeb, 0xbc, 0x7e, 0xd3, 0x82,
- 0x34, 0xaf, 0xd0, 0x2c, 0xfc, 0xdc, 0x5e, 0x5f, 0x5c, 0xbe, 0x45, 0x77, 0x20, 0xc3, 0x8b, 0x37,
- 0x68, 0xf1, 0xf7, 0xf7, 0xfa, 0x92, 0x7a, 0x2e, 0x9b, 0x0c, 0x3f, 0x57, 0x0b, 0x3f, 0xc8, 0xd7,
- 0x17, 0x97, 0x77, 0x11, 0x86, 0xc2, 0xe4, 0xce, 0xb4, 0xfc, 0x03, 0x75, 0x7d, 0x85, 0xfc, 0x83,
- 0xf6, 0x21, 0xe7, 0x5f, 0x73, 0x97, 0x7d, 0x32, 0xaf, 0x2f, 0xad, 0xbf, 0xb2, 0x70, 0x89, 0x72,
- 0xc4, 0xe2, 0xef, 0xff, 0xf5, 0x25, 0xc5, 0x64, 0xb4, 0x07, 0x59, 0x49, 0xd0, 0x97, 0x7c, 0x06,
- 0xaf, 0x2f, 0xab, 0xa7, 0xb2, 0xa0, 0x4d, 0x8a, 0x51, 0xcb, 0xff, 0x6a, 0xa8, 0xaf, 0x50, 0x27,
- 0x47, 0xf7, 0x00, 0x42, 0xc5, 0x87, 0x15, 0x7e, 0x57, 0xa8, 0xaf, 0x52, 0xff, 0x46, 0x87, 0x90,
- 0x0f, 0xee, 0x82, 0x4b, 0x7f, 0x1e, 0xa8, 0x2f, 0x2f, 0x44, 0xa3, 0x07, 0x50, 0x8e, 0x5e, 0x4e,
- 0x56, 0xfb, 0x25, 0xa0, 0xbe, 0x62, 0x85, 0x99, 0xf9, 0x8f, 0xde, 0x54, 0x56, 0xfb, 0x45, 0xa0,
- 0xbe, 0x62, 0xc1, 0x19, 0x7d, 0x02, 0xeb, 0xb3, 0x37, 0x89, 0xd5, 0xff, 0x18, 0xa8, 0x5f, 0xa2,
- 0x04, 0x8d, 0x46, 0x80, 0xe6, 0xdc, 0x40, 0x2e, 0xf1, 0x03, 0x41, 0xfd, 0x32, 0x15, 0xe9, 0x66,
- 0xfb, 0x9b, 0x1f, 0x36, 0x95, 0x6f, 0x7f, 0xd8, 0x54, 0xbe, 0xff, 0x61, 0x53, 0xf9, 0xea, 0xd1,
- 0x66, 0xe2, 0xdb, 0x47, 0x9b, 0x89, 0x3f, 0x3f, 0xda, 0x4c, 0xfc, 0xe2, 0xa5, 0x53, 0x93, 0x0e,
- 0xc6, 0xbd, 0xed, 0xbe, 0x3d, 0xda, 0x09, 0xff, 0x99, 0x34, 0xef, 0x6f, 0xa9, 0x5e, 0x96, 0x67,
- 0xa3, 0xdb, 0xff, 0x09, 0x00, 0x00, 0xff, 0xff, 0x1d, 0xb3, 0x11, 0x48, 0x4d, 0x25, 0x00, 0x00,
+func (m *ResponseProcessProposal) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-// Reference imports to suppress errors if they are not otherwise used.
-var _ context.Context
-var _ grpc.ClientConn
-
-// This is a compile-time assertion to ensure that this generated file
-// is compatible with the grpc package it is being compiled against.
-const _ = grpc.SupportPackageIsVersion4
-
-// ABCIApplicationClient is the client API for ABCIApplication service.
-//
-// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
-type ABCIApplicationClient interface {
- Echo(ctx context.Context, in *RequestEcho, opts ...grpc.CallOption) (*ResponseEcho, error)
- Flush(ctx context.Context, in *RequestFlush, opts ...grpc.CallOption) (*ResponseFlush, error)
- Info(ctx context.Context, in *RequestInfo, opts ...grpc.CallOption) (*ResponseInfo, error)
- DeliverTx(ctx context.Context, in *RequestDeliverTx, opts ...grpc.CallOption) (*ResponseDeliverTx, error)
- CheckTx(ctx context.Context, in *RequestCheckTx, opts ...grpc.CallOption) (*ResponseCheckTx, error)
- Query(ctx context.Context, in *RequestQuery, opts ...grpc.CallOption) (*ResponseQuery, error)
- Commit(ctx context.Context, in *RequestCommit, opts ...grpc.CallOption) (*ResponseCommit, error)
- InitChain(ctx context.Context, in *RequestInitChain, opts ...grpc.CallOption) (*ResponseInitChain, error)
- BeginBlock(ctx context.Context, in *RequestBeginBlock, opts ...grpc.CallOption) (*ResponseBeginBlock, error)
- EndBlock(ctx context.Context, in *RequestEndBlock, opts ...grpc.CallOption) (*ResponseEndBlock, error)
- ListSnapshots(ctx context.Context, in *RequestListSnapshots, opts ...grpc.CallOption) (*ResponseListSnapshots, error)
- OfferSnapshot(ctx context.Context, in *RequestOfferSnapshot, opts ...grpc.CallOption) (*ResponseOfferSnapshot, error)
- LoadSnapshotChunk(ctx context.Context, in *RequestLoadSnapshotChunk, opts ...grpc.CallOption) (*ResponseLoadSnapshotChunk, error)
- ApplySnapshotChunk(ctx context.Context, in *RequestApplySnapshotChunk, opts ...grpc.CallOption) (*ResponseApplySnapshotChunk, error)
+func (m *ResponseProcessProposal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ResponseProcessProposal.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
}
-
-type aBCIApplicationClient struct {
- cc *grpc.ClientConn
+func (m *ResponseProcessProposal) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseProcessProposal.Merge(m, src)
}
-
-func NewABCIApplicationClient(cc *grpc.ClientConn) ABCIApplicationClient {
- return &aBCIApplicationClient{cc}
+func (m *ResponseProcessProposal) XXX_Size() int {
+ return m.Size()
}
-
-func (c *aBCIApplicationClient) Echo(ctx context.Context, in *RequestEcho, opts ...grpc.CallOption) (*ResponseEcho, error) {
- out := new(ResponseEcho)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Echo", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+func (m *ResponseProcessProposal) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseProcessProposal.DiscardUnknown(m)
}
-func (c *aBCIApplicationClient) Flush(ctx context.Context, in *RequestFlush, opts ...grpc.CallOption) (*ResponseFlush, error) {
- out := new(ResponseFlush)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Flush", in, out, opts...)
- if err != nil {
- return nil, err
+var xxx_messageInfo_ResponseProcessProposal proto.InternalMessageInfo
+
+func (m *ResponseProcessProposal) GetStatus() ResponseProcessProposal_ProposalStatus {
+ if m != nil {
+ return m.Status
}
- return out, nil
+ return ResponseProcessProposal_UNKNOWN
}
-func (c *aBCIApplicationClient) Info(ctx context.Context, in *RequestInfo, opts ...grpc.CallOption) (*ResponseInfo, error) {
- out := new(ResponseInfo)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Info", in, out, opts...)
- if err != nil {
- return nil, err
+func (m *ResponseProcessProposal) GetAppHash() []byte {
+ if m != nil {
+ return m.AppHash
}
- return out, nil
+ return nil
}
-func (c *aBCIApplicationClient) DeliverTx(ctx context.Context, in *RequestDeliverTx, opts ...grpc.CallOption) (*ResponseDeliverTx, error) {
- out := new(ResponseDeliverTx)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/DeliverTx", in, out, opts...)
- if err != nil {
- return nil, err
+func (m *ResponseProcessProposal) GetTxResults() []*ExecTxResult {
+ if m != nil {
+ return m.TxResults
}
- return out, nil
+ return nil
}
-func (c *aBCIApplicationClient) CheckTx(ctx context.Context, in *RequestCheckTx, opts ...grpc.CallOption) (*ResponseCheckTx, error) {
- out := new(ResponseCheckTx)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/CheckTx", in, out, opts...)
- if err != nil {
- return nil, err
+func (m *ResponseProcessProposal) GetValidatorUpdates() []*ValidatorUpdate {
+ if m != nil {
+ return m.ValidatorUpdates
}
- return out, nil
+ return nil
}
-func (c *aBCIApplicationClient) Query(ctx context.Context, in *RequestQuery, opts ...grpc.CallOption) (*ResponseQuery, error) {
- out := new(ResponseQuery)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Query", in, out, opts...)
- if err != nil {
- return nil, err
+func (m *ResponseProcessProposal) GetConsensusParamUpdates() *types1.ConsensusParams {
+ if m != nil {
+ return m.ConsensusParamUpdates
}
- return out, nil
+ return nil
}
-func (c *aBCIApplicationClient) Commit(ctx context.Context, in *RequestCommit, opts ...grpc.CallOption) (*ResponseCommit, error) {
- out := new(ResponseCommit)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Commit", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+type ResponseExtendVote struct {
+ VoteExtension []byte `protobuf:"bytes,1,opt,name=vote_extension,json=voteExtension,proto3" json:"vote_extension,omitempty"`
}
-func (c *aBCIApplicationClient) InitChain(ctx context.Context, in *RequestInitChain, opts ...grpc.CallOption) (*ResponseInitChain, error) {
- out := new(ResponseInitChain)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/InitChain", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+func (m *ResponseExtendVote) Reset() { *m = ResponseExtendVote{} }
+func (m *ResponseExtendVote) String() string { return proto.CompactTextString(m) }
+func (*ResponseExtendVote) ProtoMessage() {}
+func (*ResponseExtendVote) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{38}
}
-
-func (c *aBCIApplicationClient) BeginBlock(ctx context.Context, in *RequestBeginBlock, opts ...grpc.CallOption) (*ResponseBeginBlock, error) {
- out := new(ResponseBeginBlock)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/BeginBlock", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+func (m *ResponseExtendVote) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-func (c *aBCIApplicationClient) EndBlock(ctx context.Context, in *RequestEndBlock, opts ...grpc.CallOption) (*ResponseEndBlock, error) {
- out := new(ResponseEndBlock)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/EndBlock", in, out, opts...)
- if err != nil {
- return nil, err
+func (m *ResponseExtendVote) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ResponseExtendVote.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return out, nil
}
-
-func (c *aBCIApplicationClient) ListSnapshots(ctx context.Context, in *RequestListSnapshots, opts ...grpc.CallOption) (*ResponseListSnapshots, error) {
- out := new(ResponseListSnapshots)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/ListSnapshots", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+func (m *ResponseExtendVote) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseExtendVote.Merge(m, src)
}
-
-func (c *aBCIApplicationClient) OfferSnapshot(ctx context.Context, in *RequestOfferSnapshot, opts ...grpc.CallOption) (*ResponseOfferSnapshot, error) {
- out := new(ResponseOfferSnapshot)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/OfferSnapshot", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+func (m *ResponseExtendVote) XXX_Size() int {
+ return m.Size()
}
-
-func (c *aBCIApplicationClient) LoadSnapshotChunk(ctx context.Context, in *RequestLoadSnapshotChunk, opts ...grpc.CallOption) (*ResponseLoadSnapshotChunk, error) {
- out := new(ResponseLoadSnapshotChunk)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/LoadSnapshotChunk", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
+func (m *ResponseExtendVote) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseExtendVote.DiscardUnknown(m)
}
-func (c *aBCIApplicationClient) ApplySnapshotChunk(ctx context.Context, in *RequestApplySnapshotChunk, opts ...grpc.CallOption) (*ResponseApplySnapshotChunk, error) {
- out := new(ResponseApplySnapshotChunk)
- err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/ApplySnapshotChunk", in, out, opts...)
- if err != nil {
- return nil, err
- }
- return out, nil
-}
+var xxx_messageInfo_ResponseExtendVote proto.InternalMessageInfo
-// ABCIApplicationServer is the server API for ABCIApplication service.
-type ABCIApplicationServer interface {
- Echo(context.Context, *RequestEcho) (*ResponseEcho, error)
- Flush(context.Context, *RequestFlush) (*ResponseFlush, error)
- Info(context.Context, *RequestInfo) (*ResponseInfo, error)
- DeliverTx(context.Context, *RequestDeliverTx) (*ResponseDeliverTx, error)
- CheckTx(context.Context, *RequestCheckTx) (*ResponseCheckTx, error)
- Query(context.Context, *RequestQuery) (*ResponseQuery, error)
- Commit(context.Context, *RequestCommit) (*ResponseCommit, error)
- InitChain(context.Context, *RequestInitChain) (*ResponseInitChain, error)
- BeginBlock(context.Context, *RequestBeginBlock) (*ResponseBeginBlock, error)
- EndBlock(context.Context, *RequestEndBlock) (*ResponseEndBlock, error)
- ListSnapshots(context.Context, *RequestListSnapshots) (*ResponseListSnapshots, error)
- OfferSnapshot(context.Context, *RequestOfferSnapshot) (*ResponseOfferSnapshot, error)
- LoadSnapshotChunk(context.Context, *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error)
- ApplySnapshotChunk(context.Context, *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error)
+func (m *ResponseExtendVote) GetVoteExtension() []byte {
+ if m != nil {
+ return m.VoteExtension
+ }
+ return nil
}
-// UnimplementedABCIApplicationServer can be embedded to have forward compatible implementations.
-type UnimplementedABCIApplicationServer struct {
+type ResponseVerifyVoteExtension struct {
+ Status ResponseVerifyVoteExtension_VerifyStatus `protobuf:"varint,1,opt,name=status,proto3,enum=tendermint.abci.ResponseVerifyVoteExtension_VerifyStatus" json:"status,omitempty"`
}
-func (*UnimplementedABCIApplicationServer) Echo(ctx context.Context, req *RequestEcho) (*ResponseEcho, error) {
- return nil, status.Errorf(codes.Unimplemented, "method Echo not implemented")
+func (m *ResponseVerifyVoteExtension) Reset() { *m = ResponseVerifyVoteExtension{} }
+func (m *ResponseVerifyVoteExtension) String() string { return proto.CompactTextString(m) }
+func (*ResponseVerifyVoteExtension) ProtoMessage() {}
+func (*ResponseVerifyVoteExtension) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{39}
}
-func (*UnimplementedABCIApplicationServer) Flush(ctx context.Context, req *RequestFlush) (*ResponseFlush, error) {
- return nil, status.Errorf(codes.Unimplemented, "method Flush not implemented")
+func (m *ResponseVerifyVoteExtension) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-func (*UnimplementedABCIApplicationServer) Info(ctx context.Context, req *RequestInfo) (*ResponseInfo, error) {
- return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
+func (m *ResponseVerifyVoteExtension) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ResponseVerifyVoteExtension.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
}
-func (*UnimplementedABCIApplicationServer) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
- return nil, status.Errorf(codes.Unimplemented, "method DeliverTx not implemented")
+func (m *ResponseVerifyVoteExtension) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseVerifyVoteExtension.Merge(m, src)
}
-func (*UnimplementedABCIApplicationServer) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
- return nil, status.Errorf(codes.Unimplemented, "method CheckTx not implemented")
+func (m *ResponseVerifyVoteExtension) XXX_Size() int {
+ return m.Size()
}
-func (*UnimplementedABCIApplicationServer) Query(ctx context.Context, req *RequestQuery) (*ResponseQuery, error) {
- return nil, status.Errorf(codes.Unimplemented, "method Query not implemented")
+func (m *ResponseVerifyVoteExtension) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseVerifyVoteExtension.DiscardUnknown(m)
}
-func (*UnimplementedABCIApplicationServer) Commit(ctx context.Context, req *RequestCommit) (*ResponseCommit, error) {
- return nil, status.Errorf(codes.Unimplemented, "method Commit not implemented")
+
+var xxx_messageInfo_ResponseVerifyVoteExtension proto.InternalMessageInfo
+
+func (m *ResponseVerifyVoteExtension) GetStatus() ResponseVerifyVoteExtension_VerifyStatus {
+ if m != nil {
+ return m.Status
+ }
+ return ResponseVerifyVoteExtension_UNKNOWN
}
-func (*UnimplementedABCIApplicationServer) InitChain(ctx context.Context, req *RequestInitChain) (*ResponseInitChain, error) {
- return nil, status.Errorf(codes.Unimplemented, "method InitChain not implemented")
+
+type ResponseFinalizeBlock struct {
+ Events []Event `protobuf:"bytes,1,rep,name=events,proto3" json:"events,omitempty"`
+ TxResults []*ExecTxResult `protobuf:"bytes,2,rep,name=tx_results,json=txResults,proto3" json:"tx_results,omitempty"`
+ ConsensusParamUpdates *types1.ConsensusParams `protobuf:"bytes,4,opt,name=consensus_param_updates,json=consensusParamUpdates,proto3" json:"consensus_param_updates,omitempty"`
+ AppHash []byte `protobuf:"bytes,5,opt,name=app_hash,json=appHash,proto3" json:"app_hash,omitempty"`
+ RetainHeight int64 `protobuf:"varint,6,opt,name=retain_height,json=retainHeight,proto3" json:"retain_height,omitempty"`
+ NextCoreChainLockUpdate *types1.CoreChainLock `protobuf:"bytes,100,opt,name=next_core_chain_lock_update,json=nextCoreChainLockUpdate,proto3" json:"next_core_chain_lock_update,omitempty"`
+ ValidatorSetUpdate *ValidatorSetUpdate `protobuf:"bytes,101,opt,name=validator_set_update,json=validatorSetUpdate,proto3" json:"validator_set_update,omitempty"`
}
-func (*UnimplementedABCIApplicationServer) BeginBlock(ctx context.Context, req *RequestBeginBlock) (*ResponseBeginBlock, error) {
- return nil, status.Errorf(codes.Unimplemented, "method BeginBlock not implemented")
+
+func (m *ResponseFinalizeBlock) Reset() { *m = ResponseFinalizeBlock{} }
+func (m *ResponseFinalizeBlock) String() string { return proto.CompactTextString(m) }
+func (*ResponseFinalizeBlock) ProtoMessage() {}
+func (*ResponseFinalizeBlock) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{40}
}
-func (*UnimplementedABCIApplicationServer) EndBlock(ctx context.Context, req *RequestEndBlock) (*ResponseEndBlock, error) {
- return nil, status.Errorf(codes.Unimplemented, "method EndBlock not implemented")
+func (m *ResponseFinalizeBlock) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-func (*UnimplementedABCIApplicationServer) ListSnapshots(ctx context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
- return nil, status.Errorf(codes.Unimplemented, "method ListSnapshots not implemented")
+func (m *ResponseFinalizeBlock) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ResponseFinalizeBlock.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
}
-func (*UnimplementedABCIApplicationServer) OfferSnapshot(ctx context.Context, req *RequestOfferSnapshot) (*ResponseOfferSnapshot, error) {
- return nil, status.Errorf(codes.Unimplemented, "method OfferSnapshot not implemented")
+func (m *ResponseFinalizeBlock) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseFinalizeBlock.Merge(m, src)
}
-func (*UnimplementedABCIApplicationServer) LoadSnapshotChunk(ctx context.Context, req *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error) {
- return nil, status.Errorf(codes.Unimplemented, "method LoadSnapshotChunk not implemented")
+func (m *ResponseFinalizeBlock) XXX_Size() int {
+ return m.Size()
}
-func (*UnimplementedABCIApplicationServer) ApplySnapshotChunk(ctx context.Context, req *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error) {
- return nil, status.Errorf(codes.Unimplemented, "method ApplySnapshotChunk not implemented")
+func (m *ResponseFinalizeBlock) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseFinalizeBlock.DiscardUnknown(m)
}
-func RegisterABCIApplicationServer(s *grpc.Server, srv ABCIApplicationServer) {
- s.RegisterService(&_ABCIApplication_serviceDesc, srv)
-}
+var xxx_messageInfo_ResponseFinalizeBlock proto.InternalMessageInfo
-func _ABCIApplication_Echo_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestEcho)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).Echo(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/Echo",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).Echo(ctx, req.(*RequestEcho))
+func (m *ResponseFinalizeBlock) GetEvents() []Event {
+ if m != nil {
+ return m.Events
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_Flush_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestFlush)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).Flush(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/Flush",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).Flush(ctx, req.(*RequestFlush))
+func (m *ResponseFinalizeBlock) GetTxResults() []*ExecTxResult {
+ if m != nil {
+ return m.TxResults
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestInfo)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).Info(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/Info",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).Info(ctx, req.(*RequestInfo))
+func (m *ResponseFinalizeBlock) GetConsensusParamUpdates() *types1.ConsensusParams {
+ if m != nil {
+ return m.ConsensusParamUpdates
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_DeliverTx_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestDeliverTx)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).DeliverTx(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/DeliverTx",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).DeliverTx(ctx, req.(*RequestDeliverTx))
+func (m *ResponseFinalizeBlock) GetAppHash() []byte {
+ if m != nil {
+ return m.AppHash
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_CheckTx_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestCheckTx)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).CheckTx(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/CheckTx",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).CheckTx(ctx, req.(*RequestCheckTx))
+func (m *ResponseFinalizeBlock) GetRetainHeight() int64 {
+ if m != nil {
+ return m.RetainHeight
}
- return interceptor(ctx, in, info, handler)
+ return 0
}
-func _ABCIApplication_Query_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestQuery)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).Query(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/Query",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).Query(ctx, req.(*RequestQuery))
+func (m *ResponseFinalizeBlock) GetNextCoreChainLockUpdate() *types1.CoreChainLock {
+ if m != nil {
+ return m.NextCoreChainLockUpdate
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_Commit_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestCommit)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).Commit(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/Commit",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).Commit(ctx, req.(*RequestCommit))
+func (m *ResponseFinalizeBlock) GetValidatorSetUpdate() *ValidatorSetUpdate {
+ if m != nil {
+ return m.ValidatorSetUpdate
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_InitChain_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestInitChain)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).InitChain(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/InitChain",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).InitChain(ctx, req.(*RequestInitChain))
- }
- return interceptor(ctx, in, info, handler)
+type CommitInfo struct {
+ Round int32 `protobuf:"varint,1,opt,name=round,proto3" json:"round,omitempty"`
+ QuorumHash []byte `protobuf:"bytes,3,opt,name=quorum_hash,json=quorumHash,proto3" json:"quorum_hash,omitempty"`
+ BlockSignature []byte `protobuf:"bytes,4,opt,name=block_signature,json=blockSignature,proto3" json:"block_signature,omitempty"`
+ StateSignature []byte `protobuf:"bytes,5,opt,name=state_signature,json=stateSignature,proto3" json:"state_signature,omitempty"`
}
-func _ABCIApplication_BeginBlock_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestBeginBlock)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).BeginBlock(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/BeginBlock",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).BeginBlock(ctx, req.(*RequestBeginBlock))
- }
- return interceptor(ctx, in, info, handler)
+func (m *CommitInfo) Reset() { *m = CommitInfo{} }
+func (m *CommitInfo) String() string { return proto.CompactTextString(m) }
+func (*CommitInfo) ProtoMessage() {}
+func (*CommitInfo) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{41}
}
-
-func _ABCIApplication_EndBlock_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestEndBlock)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).EndBlock(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/EndBlock",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).EndBlock(ctx, req.(*RequestEndBlock))
+func (m *CommitInfo) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CommitInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_CommitInfo.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return interceptor(ctx, in, info, handler)
+}
+func (m *CommitInfo) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CommitInfo.Merge(m, src)
+}
+func (m *CommitInfo) XXX_Size() int {
+ return m.Size()
+}
+func (m *CommitInfo) XXX_DiscardUnknown() {
+ xxx_messageInfo_CommitInfo.DiscardUnknown(m)
}
-func _ABCIApplication_ListSnapshots_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestListSnapshots)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).ListSnapshots(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/ListSnapshots",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).ListSnapshots(ctx, req.(*RequestListSnapshots))
+var xxx_messageInfo_CommitInfo proto.InternalMessageInfo
+
+func (m *CommitInfo) GetRound() int32 {
+ if m != nil {
+ return m.Round
}
- return interceptor(ctx, in, info, handler)
+ return 0
}
-func _ABCIApplication_OfferSnapshot_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestOfferSnapshot)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).OfferSnapshot(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/OfferSnapshot",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).OfferSnapshot(ctx, req.(*RequestOfferSnapshot))
+func (m *CommitInfo) GetQuorumHash() []byte {
+ if m != nil {
+ return m.QuorumHash
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_LoadSnapshotChunk_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestLoadSnapshotChunk)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).LoadSnapshotChunk(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/LoadSnapshotChunk",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).LoadSnapshotChunk(ctx, req.(*RequestLoadSnapshotChunk))
+func (m *CommitInfo) GetBlockSignature() []byte {
+ if m != nil {
+ return m.BlockSignature
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-func _ABCIApplication_ApplySnapshotChunk_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
- in := new(RequestApplySnapshotChunk)
- if err := dec(in); err != nil {
- return nil, err
- }
- if interceptor == nil {
- return srv.(ABCIApplicationServer).ApplySnapshotChunk(ctx, in)
- }
- info := &grpc.UnaryServerInfo{
- Server: srv,
- FullMethod: "/tendermint.abci.ABCIApplication/ApplySnapshotChunk",
- }
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
- return srv.(ABCIApplicationServer).ApplySnapshotChunk(ctx, req.(*RequestApplySnapshotChunk))
+func (m *CommitInfo) GetStateSignature() []byte {
+ if m != nil {
+ return m.StateSignature
}
- return interceptor(ctx, in, info, handler)
+ return nil
}
-var _ABCIApplication_serviceDesc = grpc.ServiceDesc{
- ServiceName: "tendermint.abci.ABCIApplication",
- HandlerType: (*ABCIApplicationServer)(nil),
- Methods: []grpc.MethodDesc{
- {
- MethodName: "Echo",
- Handler: _ABCIApplication_Echo_Handler,
- },
- {
- MethodName: "Flush",
- Handler: _ABCIApplication_Flush_Handler,
- },
- {
- MethodName: "Info",
- Handler: _ABCIApplication_Info_Handler,
- },
- {
- MethodName: "DeliverTx",
- Handler: _ABCIApplication_DeliverTx_Handler,
- },
- {
- MethodName: "CheckTx",
- Handler: _ABCIApplication_CheckTx_Handler,
- },
- {
- MethodName: "Query",
- Handler: _ABCIApplication_Query_Handler,
- },
- {
- MethodName: "Commit",
- Handler: _ABCIApplication_Commit_Handler,
- },
- {
- MethodName: "InitChain",
- Handler: _ABCIApplication_InitChain_Handler,
- },
- {
- MethodName: "BeginBlock",
- Handler: _ABCIApplication_BeginBlock_Handler,
- },
- {
- MethodName: "EndBlock",
- Handler: _ABCIApplication_EndBlock_Handler,
- },
- {
- MethodName: "ListSnapshots",
- Handler: _ABCIApplication_ListSnapshots_Handler,
- },
- {
- MethodName: "OfferSnapshot",
- Handler: _ABCIApplication_OfferSnapshot_Handler,
- },
- {
- MethodName: "LoadSnapshotChunk",
- Handler: _ABCIApplication_LoadSnapshotChunk_Handler,
- },
- {
- MethodName: "ApplySnapshotChunk",
- Handler: _ABCIApplication_ApplySnapshotChunk_Handler,
- },
- },
- Streams: []grpc.StreamDesc{},
- Metadata: "tendermint/abci/types.proto",
+// ExtendedCommitInfo is similar to CommitInfo except that it is only used in
+// the PrepareProposal request such that Tendermint can provide vote extensions
+// to the application.
+type ExtendedCommitInfo struct {
+ // The round at which the block proposer decided in the previous height.
+ Round int32 `protobuf:"varint,1,opt,name=round,proto3" json:"round,omitempty"`
+ // List of validators' addresses in the last validator set with their voting
+ // information, including vote extensions.
+ Votes []ExtendedVoteInfo `protobuf:"bytes,2,rep,name=votes,proto3" json:"votes"`
}
-func (m *Request) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
+func (m *ExtendedCommitInfo) Reset() { *m = ExtendedCommitInfo{} }
+func (m *ExtendedCommitInfo) String() string { return proto.CompactTextString(m) }
+func (*ExtendedCommitInfo) ProtoMessage() {}
+func (*ExtendedCommitInfo) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{42}
}
-
-func (m *Request) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ExtendedCommitInfo) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-func (m *Request) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Value != nil {
- {
- size := m.Value.Size()
- i -= size
- if _, err := m.Value.MarshalTo(dAtA[i:]); err != nil {
- return 0, err
- }
+func (m *ExtendedCommitInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ExtendedCommitInfo.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
+ return b[:n], nil
}
- return len(dAtA) - i, nil
+}
+func (m *ExtendedCommitInfo) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ExtendedCommitInfo.Merge(m, src)
+}
+func (m *ExtendedCommitInfo) XXX_Size() int {
+ return m.Size()
+}
+func (m *ExtendedCommitInfo) XXX_DiscardUnknown() {
+ xxx_messageInfo_ExtendedCommitInfo.DiscardUnknown(m)
}
-func (m *Request_Echo) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+var xxx_messageInfo_ExtendedCommitInfo proto.InternalMessageInfo
+
+func (m *ExtendedCommitInfo) GetRound() int32 {
+ if m != nil {
+ return m.Round
+ }
+ return 0
}
-func (m *Request_Echo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Echo != nil {
- {
- size, err := m.Echo.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
+func (m *ExtendedCommitInfo) GetVotes() []ExtendedVoteInfo {
+ if m != nil {
+ return m.Votes
}
- return len(dAtA) - i, nil
+ return nil
}
-func (m *Request_Flush) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+// Event allows application developers to attach additional information to
+// ResponseBeginBlock, ResponseEndBlock, ResponseCheckTx and ResponseDeliverTx.
+// Later, transactions may be queried using these events.
+type Event struct {
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Attributes []EventAttribute `protobuf:"bytes,2,rep,name=attributes,proto3" json:"attributes,omitempty"`
}
-func (m *Request_Flush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Flush != nil {
- {
- size, err := m.Flush.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+func (m *Event) Reset() { *m = Event{} }
+func (m *Event) String() string { return proto.CompactTextString(m) }
+func (*Event) ProtoMessage() {}
+func (*Event) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{43}
+}
+func (m *Event) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *Event) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_Event.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0x12
+ return b[:n], nil
}
- return len(dAtA) - i, nil
}
-func (m *Request_Info) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *Event) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Event.Merge(m, src)
+}
+func (m *Event) XXX_Size() int {
+ return m.Size()
+}
+func (m *Event) XXX_DiscardUnknown() {
+ xxx_messageInfo_Event.DiscardUnknown(m)
}
-func (m *Request_Info) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Info != nil {
- {
- size, err := m.Info.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
+var xxx_messageInfo_Event proto.InternalMessageInfo
+
+func (m *Event) GetType() string {
+ if m != nil {
+ return m.Type
}
- return len(dAtA) - i, nil
+ return ""
}
-func (m *Request_InitChain) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+func (m *Event) GetAttributes() []EventAttribute {
+ if m != nil {
+ return m.Attributes
+ }
+ return nil
}
-func (m *Request_InitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.InitChain != nil {
- {
- size, err := m.InitChain.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+// EventAttribute is a single key-value pair, associated with an event.
+type EventAttribute struct {
+ Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
+ Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ Index bool `protobuf:"varint,3,opt,name=index,proto3" json:"index,omitempty"`
+}
+
+func (m *EventAttribute) Reset() { *m = EventAttribute{} }
+func (m *EventAttribute) String() string { return proto.CompactTextString(m) }
+func (*EventAttribute) ProtoMessage() {}
+func (*EventAttribute) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{44}
+}
+func (m *EventAttribute) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *EventAttribute) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_EventAttribute.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0x22
+ return b[:n], nil
}
- return len(dAtA) - i, nil
}
-func (m *Request_Query) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *EventAttribute) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_EventAttribute.Merge(m, src)
+}
+func (m *EventAttribute) XXX_Size() int {
+ return m.Size()
+}
+func (m *EventAttribute) XXX_DiscardUnknown() {
+ xxx_messageInfo_EventAttribute.DiscardUnknown(m)
}
-func (m *Request_Query) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Query != nil {
- {
- size, err := m.Query.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2a
+var xxx_messageInfo_EventAttribute proto.InternalMessageInfo
+
+func (m *EventAttribute) GetKey() string {
+ if m != nil {
+ return m.Key
}
- return len(dAtA) - i, nil
+ return ""
}
-func (m *Request_BeginBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+func (m *EventAttribute) GetValue() string {
+ if m != nil {
+ return m.Value
+ }
+ return ""
}
-func (m *Request_BeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.BeginBlock != nil {
- {
- size, err := m.BeginBlock.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x32
+func (m *EventAttribute) GetIndex() bool {
+ if m != nil {
+ return m.Index
}
- return len(dAtA) - i, nil
+ return false
}
-func (m *Request_CheckTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+// ExecTxResult contains results of executing one individual transaction.
+//
+// * Its structure is equivalent to #ResponseDeliverTx which will be deprecated/deleted
+type ExecTxResult struct {
+ Code uint32 `protobuf:"varint,1,opt,name=code,proto3" json:"code,omitempty"`
+ Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
+ Log string `protobuf:"bytes,3,opt,name=log,proto3" json:"log,omitempty"`
+ Info string `protobuf:"bytes,4,opt,name=info,proto3" json:"info,omitempty"`
+ GasWanted int64 `protobuf:"varint,5,opt,name=gas_wanted,json=gasWanted,proto3" json:"gas_wanted,omitempty"`
+ GasUsed int64 `protobuf:"varint,6,opt,name=gas_used,json=gasUsed,proto3" json:"gas_used,omitempty"`
+ Events []Event `protobuf:"bytes,7,rep,name=events,proto3" json:"events,omitempty"`
+ Codespace string `protobuf:"bytes,8,opt,name=codespace,proto3" json:"codespace,omitempty"`
}
-func (m *Request_CheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.CheckTx != nil {
- {
- size, err := m.CheckTx.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+func (m *ExecTxResult) Reset() { *m = ExecTxResult{} }
+func (m *ExecTxResult) String() string { return proto.CompactTextString(m) }
+func (*ExecTxResult) ProtoMessage() {}
+func (*ExecTxResult) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{45}
+}
+func (m *ExecTxResult) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *ExecTxResult) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ExecTxResult.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0x3a
+ return b[:n], nil
}
- return len(dAtA) - i, nil
}
-func (m *Request_DeliverTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ExecTxResult) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ExecTxResult.Merge(m, src)
+}
+func (m *ExecTxResult) XXX_Size() int {
+ return m.Size()
+}
+func (m *ExecTxResult) XXX_DiscardUnknown() {
+ xxx_messageInfo_ExecTxResult.DiscardUnknown(m)
}
-func (m *Request_DeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.DeliverTx != nil {
- {
- size, err := m.DeliverTx.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x42
+var xxx_messageInfo_ExecTxResult proto.InternalMessageInfo
+
+func (m *ExecTxResult) GetCode() uint32 {
+ if m != nil {
+ return m.Code
}
- return len(dAtA) - i, nil
+ return 0
}
-func (m *Request_EndBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+func (m *ExecTxResult) GetData() []byte {
+ if m != nil {
+ return m.Data
+ }
+ return nil
}
-func (m *Request_EndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.EndBlock != nil {
- {
- size, err := m.EndBlock.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x4a
+func (m *ExecTxResult) GetLog() string {
+ if m != nil {
+ return m.Log
}
- return len(dAtA) - i, nil
+ return ""
}
-func (m *Request_Commit) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+func (m *ExecTxResult) GetInfo() string {
+ if m != nil {
+ return m.Info
+ }
+ return ""
}
-func (m *Request_Commit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Commit != nil {
- {
- size, err := m.Commit.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x52
+func (m *ExecTxResult) GetGasWanted() int64 {
+ if m != nil {
+ return m.GasWanted
}
- return len(dAtA) - i, nil
-}
-func (m *Request_ListSnapshots) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return 0
}
-func (m *Request_ListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.ListSnapshots != nil {
- {
- size, err := m.ListSnapshots.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x5a
+func (m *ExecTxResult) GetGasUsed() int64 {
+ if m != nil {
+ return m.GasUsed
}
- return len(dAtA) - i, nil
-}
-func (m *Request_OfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return 0
}
-func (m *Request_OfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.OfferSnapshot != nil {
- {
- size, err := m.OfferSnapshot.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x62
+func (m *ExecTxResult) GetEvents() []Event {
+ if m != nil {
+ return m.Events
}
- return len(dAtA) - i, nil
-}
-func (m *Request_LoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return nil
}
-func (m *Request_LoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.LoadSnapshotChunk != nil {
- {
- size, err := m.LoadSnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x6a
+func (m *ExecTxResult) GetCodespace() string {
+ if m != nil {
+ return m.Codespace
}
- return len(dAtA) - i, nil
+ return ""
}
-func (m *Request_ApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+// TxResult contains results of executing the transaction.
+//
+// One usage is indexing transaction results.
+type TxResult struct {
+ Height int64 `protobuf:"varint,1,opt,name=height,proto3" json:"height,omitempty"`
+ Index uint32 `protobuf:"varint,2,opt,name=index,proto3" json:"index,omitempty"`
+ Tx []byte `protobuf:"bytes,3,opt,name=tx,proto3" json:"tx,omitempty"`
+ Result ExecTxResult `protobuf:"bytes,4,opt,name=result,proto3" json:"result"`
}
-func (m *Request_ApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.ApplySnapshotChunk != nil {
- {
- size, err := m.ApplySnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+func (m *TxResult) Reset() { *m = TxResult{} }
+func (m *TxResult) String() string { return proto.CompactTextString(m) }
+func (*TxResult) ProtoMessage() {}
+func (*TxResult) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{46}
+}
+func (m *TxResult) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *TxResult) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_TxResult.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0x72
+ return b[:n], nil
}
- return len(dAtA) - i, nil
}
-func (m *RequestEcho) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
+func (m *TxResult) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_TxResult.Merge(m, src)
}
-
-func (m *RequestEcho) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *TxResult) XXX_Size() int {
+ return m.Size()
}
-
-func (m *RequestEcho) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Message) > 0 {
- i -= len(m.Message)
- copy(dAtA[i:], m.Message)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Message)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
+func (m *TxResult) XXX_DiscardUnknown() {
+ xxx_messageInfo_TxResult.DiscardUnknown(m)
}
-func (m *RequestFlush) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+var xxx_messageInfo_TxResult proto.InternalMessageInfo
+
+func (m *TxResult) GetHeight() int64 {
+ if m != nil {
+ return m.Height
}
- return dAtA[:n], nil
+ return 0
}
-func (m *RequestFlush) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *TxResult) GetIndex() uint32 {
+ if m != nil {
+ return m.Index
+ }
+ return 0
}
-func (m *RequestFlush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- return len(dAtA) - i, nil
+func (m *TxResult) GetTx() []byte {
+ if m != nil {
+ return m.Tx
+ }
+ return nil
}
-func (m *RequestInfo) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *TxResult) GetResult() ExecTxResult {
+ if m != nil {
+ return m.Result
}
- return dAtA[:n], nil
+ return ExecTxResult{}
}
-func (m *RequestInfo) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+type TxRecord struct {
+ Action TxRecord_TxAction `protobuf:"varint,1,opt,name=action,proto3,enum=tendermint.abci.TxRecord_TxAction" json:"action,omitempty"`
+ Tx []byte `protobuf:"bytes,2,opt,name=tx,proto3" json:"tx,omitempty"`
}
-func (m *RequestInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.AbciVersion) > 0 {
- i -= len(m.AbciVersion)
- copy(dAtA[i:], m.AbciVersion)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.AbciVersion)))
- i--
- dAtA[i] = 0x22
- }
- if m.P2PVersion != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.P2PVersion))
- i--
- dAtA[i] = 0x18
- }
- if m.BlockVersion != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.BlockVersion))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Version) > 0 {
- i -= len(m.Version)
- copy(dAtA[i:], m.Version)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Version)))
- i--
- dAtA[i] = 0xa
+func (m *TxRecord) Reset() { *m = TxRecord{} }
+func (m *TxRecord) String() string { return proto.CompactTextString(m) }
+func (*TxRecord) ProtoMessage() {}
+func (*TxRecord) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{47}
+}
+func (m *TxRecord) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *TxRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_TxRecord.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return len(dAtA) - i, nil
+}
+func (m *TxRecord) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_TxRecord.Merge(m, src)
+}
+func (m *TxRecord) XXX_Size() int {
+ return m.Size()
+}
+func (m *TxRecord) XXX_DiscardUnknown() {
+ xxx_messageInfo_TxRecord.DiscardUnknown(m)
}
-func (m *RequestInitChain) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+var xxx_messageInfo_TxRecord proto.InternalMessageInfo
+
+func (m *TxRecord) GetAction() TxRecord_TxAction {
+ if m != nil {
+ return m.Action
}
- return dAtA[:n], nil
+ return TxRecord_UNKNOWN
}
-func (m *RequestInitChain) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *TxRecord) GetTx() []byte {
+ if m != nil {
+ return m.Tx
+ }
+ return nil
}
-func (m *RequestInitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.InitialCoreHeight != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.InitialCoreHeight))
- i--
- dAtA[i] = 0x38
- }
- if m.InitialHeight != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.InitialHeight))
- i--
- dAtA[i] = 0x30
- }
- if len(m.AppStateBytes) > 0 {
- i -= len(m.AppStateBytes)
- copy(dAtA[i:], m.AppStateBytes)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.AppStateBytes)))
- i--
- dAtA[i] = 0x2a
- }
- if m.ValidatorSet != nil {
- {
- size, err := m.ValidatorSet.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x22
- }
- if m.ConsensusParams != nil {
- {
- size, err := m.ConsensusParams.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
- }
- if len(m.ChainId) > 0 {
- i -= len(m.ChainId)
- copy(dAtA[i:], m.ChainId)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.ChainId)))
- i--
- dAtA[i] = 0x12
- }
- n17, err17 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
- if err17 != nil {
- return 0, err17
- }
- i -= n17
- i = encodeVarintTypes(dAtA, i, uint64(n17))
- i--
- dAtA[i] = 0xa
- return len(dAtA) - i, nil
+// Validator
+type Validator struct {
+ // bytes address = 1; // The first 20 bytes of SHA256(public key)
+ // PubKey pub_key = 2 [(gogoproto.nullable)=false];
+ Power int64 `protobuf:"varint,3,opt,name=power,proto3" json:"power,omitempty"`
+ ProTxHash []byte `protobuf:"bytes,4,opt,name=pro_tx_hash,json=proTxHash,proto3" json:"pro_tx_hash,omitempty"`
}
-func (m *RequestQuery) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
+func (m *Validator) Reset() { *m = Validator{} }
+func (m *Validator) String() string { return proto.CompactTextString(m) }
+func (*Validator) ProtoMessage() {}
+func (*Validator) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{48}
}
-
-func (m *RequestQuery) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *Validator) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-func (m *RequestQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Prove {
- i--
- if m.Prove {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
+func (m *Validator) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_Validator.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0x20
- }
- if m.Height != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Height))
- i--
- dAtA[i] = 0x18
- }
- if len(m.Path) > 0 {
- i -= len(m.Path)
- copy(dAtA[i:], m.Path)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Path)))
- i--
- dAtA[i] = 0x12
+ return b[:n], nil
}
- if len(m.Data) > 0 {
- i -= len(m.Data)
- copy(dAtA[i:], m.Data)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
- i--
- dAtA[i] = 0xa
+}
+func (m *Validator) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Validator.Merge(m, src)
+}
+func (m *Validator) XXX_Size() int {
+ return m.Size()
+}
+func (m *Validator) XXX_DiscardUnknown() {
+ xxx_messageInfo_Validator.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Validator proto.InternalMessageInfo
+
+func (m *Validator) GetPower() int64 {
+ if m != nil {
+ return m.Power
}
- return len(dAtA) - i, nil
+ return 0
}
-func (m *RequestBeginBlock) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *Validator) GetProTxHash() []byte {
+ if m != nil {
+ return m.ProTxHash
}
- return dAtA[:n], nil
+ return nil
}
-func (m *RequestBeginBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+// ValidatorUpdate
+type ValidatorUpdate struct {
+ PubKey *crypto.PublicKey `protobuf:"bytes,1,opt,name=pub_key,json=pubKey,proto3" json:"pub_key,omitempty"`
+ Power int64 `protobuf:"varint,2,opt,name=power,proto3" json:"power,omitempty"`
+ ProTxHash []byte `protobuf:"bytes,3,opt,name=pro_tx_hash,json=proTxHash,proto3" json:"pro_tx_hash,omitempty"`
+ NodeAddress string `protobuf:"bytes,4,opt,name=node_address,json=nodeAddress,proto3" json:"node_address,omitempty"`
}
-func (m *RequestBeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.ByzantineValidators) > 0 {
- for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x22
- }
- }
- {
- size, err := m.LastCommitInfo.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
- {
- size, err := m.Header.MarshalToSizedBuffer(dAtA[:i])
+func (m *ValidatorUpdate) Reset() { *m = ValidatorUpdate{} }
+func (m *ValidatorUpdate) String() string { return proto.CompactTextString(m) }
+func (*ValidatorUpdate) ProtoMessage() {}
+func (*ValidatorUpdate) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{49}
+}
+func (m *ValidatorUpdate) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *ValidatorUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ValidatorUpdate.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
if err != nil {
- return 0, err
+ return nil, err
}
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ return b[:n], nil
}
- i--
- dAtA[i] = 0x12
- if len(m.Hash) > 0 {
- i -= len(m.Hash)
- copy(dAtA[i:], m.Hash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
- i--
- dAtA[i] = 0xa
+}
+func (m *ValidatorUpdate) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ValidatorUpdate.Merge(m, src)
+}
+func (m *ValidatorUpdate) XXX_Size() int {
+ return m.Size()
+}
+func (m *ValidatorUpdate) XXX_DiscardUnknown() {
+ xxx_messageInfo_ValidatorUpdate.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ValidatorUpdate proto.InternalMessageInfo
+
+func (m *ValidatorUpdate) GetPubKey() *crypto.PublicKey {
+ if m != nil {
+ return m.PubKey
}
- return len(dAtA) - i, nil
+ return nil
}
-func (m *RequestCheckTx) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *ValidatorUpdate) GetPower() int64 {
+ if m != nil {
+ return m.Power
}
- return dAtA[:n], nil
+ return 0
}
-func (m *RequestCheckTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ValidatorUpdate) GetProTxHash() []byte {
+ if m != nil {
+ return m.ProTxHash
+ }
+ return nil
}
-func (m *RequestCheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Type != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Type))
- i--
- dAtA[i] = 0x10
- }
- if len(m.Tx) > 0 {
- i -= len(m.Tx)
- copy(dAtA[i:], m.Tx)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
- i--
- dAtA[i] = 0xa
+func (m *ValidatorUpdate) GetNodeAddress() string {
+ if m != nil {
+ return m.NodeAddress
}
- return len(dAtA) - i, nil
+ return ""
}
-func (m *RequestDeliverTx) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
+type ValidatorSetUpdate struct {
+ ValidatorUpdates []ValidatorUpdate `protobuf:"bytes,1,rep,name=validator_updates,json=validatorUpdates,proto3" json:"validator_updates"`
+ ThresholdPublicKey crypto.PublicKey `protobuf:"bytes,2,opt,name=threshold_public_key,json=thresholdPublicKey,proto3" json:"threshold_public_key"`
+ QuorumHash []byte `protobuf:"bytes,3,opt,name=quorum_hash,json=quorumHash,proto3" json:"quorum_hash,omitempty"`
}
-func (m *RequestDeliverTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ValidatorSetUpdate) Reset() { *m = ValidatorSetUpdate{} }
+func (m *ValidatorSetUpdate) String() string { return proto.CompactTextString(m) }
+func (*ValidatorSetUpdate) ProtoMessage() {}
+func (*ValidatorSetUpdate) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{50}
}
-
-func (m *RequestDeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Tx) > 0 {
- i -= len(m.Tx)
- copy(dAtA[i:], m.Tx)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
+func (m *ValidatorSetUpdate) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-func (m *RequestEndBlock) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *ValidatorSetUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ValidatorSetUpdate.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return dAtA[:n], nil
}
-
-func (m *RequestEndBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ValidatorSetUpdate) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ValidatorSetUpdate.Merge(m, src)
+}
+func (m *ValidatorSetUpdate) XXX_Size() int {
+ return m.Size()
+}
+func (m *ValidatorSetUpdate) XXX_DiscardUnknown() {
+ xxx_messageInfo_ValidatorSetUpdate.DiscardUnknown(m)
}
-func (m *RequestEndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Height != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Height))
- i--
- dAtA[i] = 0x8
+var xxx_messageInfo_ValidatorSetUpdate proto.InternalMessageInfo
+
+func (m *ValidatorSetUpdate) GetValidatorUpdates() []ValidatorUpdate {
+ if m != nil {
+ return m.ValidatorUpdates
}
- return len(dAtA) - i, nil
+ return nil
}
-func (m *RequestCommit) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *ValidatorSetUpdate) GetThresholdPublicKey() crypto.PublicKey {
+ if m != nil {
+ return m.ThresholdPublicKey
}
- return dAtA[:n], nil
+ return crypto.PublicKey{}
}
-func (m *RequestCommit) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ValidatorSetUpdate) GetQuorumHash() []byte {
+ if m != nil {
+ return m.QuorumHash
+ }
+ return nil
}
-func (m *RequestCommit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- return len(dAtA) - i, nil
+type ThresholdPublicKeyUpdate struct {
+ ThresholdPublicKey crypto.PublicKey `protobuf:"bytes,1,opt,name=threshold_public_key,json=thresholdPublicKey,proto3" json:"threshold_public_key"`
}
-func (m *RequestListSnapshots) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *ThresholdPublicKeyUpdate) Reset() { *m = ThresholdPublicKeyUpdate{} }
+func (m *ThresholdPublicKeyUpdate) String() string { return proto.CompactTextString(m) }
+func (*ThresholdPublicKeyUpdate) ProtoMessage() {}
+func (*ThresholdPublicKeyUpdate) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{51}
+}
+func (m *ThresholdPublicKeyUpdate) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *ThresholdPublicKeyUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ThresholdPublicKeyUpdate.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return dAtA[:n], nil
}
-
-func (m *RequestListSnapshots) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ThresholdPublicKeyUpdate) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ThresholdPublicKeyUpdate.Merge(m, src)
}
-
-func (m *RequestListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- return len(dAtA) - i, nil
+func (m *ThresholdPublicKeyUpdate) XXX_Size() int {
+ return m.Size()
+}
+func (m *ThresholdPublicKeyUpdate) XXX_DiscardUnknown() {
+ xxx_messageInfo_ThresholdPublicKeyUpdate.DiscardUnknown(m)
}
-func (m *RequestOfferSnapshot) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+var xxx_messageInfo_ThresholdPublicKeyUpdate proto.InternalMessageInfo
+
+func (m *ThresholdPublicKeyUpdate) GetThresholdPublicKey() crypto.PublicKey {
+ if m != nil {
+ return m.ThresholdPublicKey
}
- return dAtA[:n], nil
+ return crypto.PublicKey{}
}
-func (m *RequestOfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+type QuorumHashUpdate struct {
+ QuorumHash []byte `protobuf:"bytes,1,opt,name=quorum_hash,json=quorumHash,proto3" json:"quorum_hash,omitempty"`
}
-func (m *RequestOfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.AppHash) > 0 {
- i -= len(m.AppHash)
- copy(dAtA[i:], m.AppHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
- i--
- dAtA[i] = 0x12
- }
- if m.Snapshot != nil {
- {
- size, err := m.Snapshot.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+func (m *QuorumHashUpdate) Reset() { *m = QuorumHashUpdate{} }
+func (m *QuorumHashUpdate) String() string { return proto.CompactTextString(m) }
+func (*QuorumHashUpdate) ProtoMessage() {}
+func (*QuorumHashUpdate) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{52}
+}
+func (m *QuorumHashUpdate) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *QuorumHashUpdate) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_QuorumHashUpdate.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0xa
+ return b[:n], nil
}
- return len(dAtA) - i, nil
+}
+func (m *QuorumHashUpdate) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_QuorumHashUpdate.Merge(m, src)
+}
+func (m *QuorumHashUpdate) XXX_Size() int {
+ return m.Size()
+}
+func (m *QuorumHashUpdate) XXX_DiscardUnknown() {
+ xxx_messageInfo_QuorumHashUpdate.DiscardUnknown(m)
}
-func (m *RequestLoadSnapshotChunk) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+var xxx_messageInfo_QuorumHashUpdate proto.InternalMessageInfo
+
+func (m *QuorumHashUpdate) GetQuorumHash() []byte {
+ if m != nil {
+ return m.QuorumHash
}
- return dAtA[:n], nil
+ return nil
}
-func (m *RequestLoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+// VoteInfo
+type VoteInfo struct {
+ Validator Validator `protobuf:"bytes,1,opt,name=validator,proto3" json:"validator"`
+ SignedLastBlock bool `protobuf:"varint,2,opt,name=signed_last_block,json=signedLastBlock,proto3" json:"signed_last_block,omitempty"`
}
-func (m *RequestLoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Chunk != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Chunk))
- i--
- dAtA[i] = 0x18
- }
- if m.Format != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Format))
- i--
- dAtA[i] = 0x10
+func (m *VoteInfo) Reset() { *m = VoteInfo{} }
+func (m *VoteInfo) String() string { return proto.CompactTextString(m) }
+func (*VoteInfo) ProtoMessage() {}
+func (*VoteInfo) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{53}
+}
+func (m *VoteInfo) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *VoteInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_VoteInfo.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- if m.Height != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Height))
- i--
- dAtA[i] = 0x8
+}
+func (m *VoteInfo) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_VoteInfo.Merge(m, src)
+}
+func (m *VoteInfo) XXX_Size() int {
+ return m.Size()
+}
+func (m *VoteInfo) XXX_DiscardUnknown() {
+ xxx_messageInfo_VoteInfo.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_VoteInfo proto.InternalMessageInfo
+
+func (m *VoteInfo) GetValidator() Validator {
+ if m != nil {
+ return m.Validator
}
- return len(dAtA) - i, nil
+ return Validator{}
}
-func (m *RequestApplySnapshotChunk) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+func (m *VoteInfo) GetSignedLastBlock() bool {
+ if m != nil {
+ return m.SignedLastBlock
}
- return dAtA[:n], nil
+ return false
}
-func (m *RequestApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+// ExtendedVoteInfo
+type ExtendedVoteInfo struct {
+ // The validator that sent the vote.
+ Validator Validator `protobuf:"bytes,1,opt,name=validator,proto3" json:"validator"`
+ // Indicates whether the validator signed the last block, allowing for rewards based on validator availability.
+ SignedLastBlock bool `protobuf:"varint,2,opt,name=signed_last_block,json=signedLastBlock,proto3" json:"signed_last_block,omitempty"`
+ // Non-deterministic extension provided by the sending validator's application.
+ VoteExtension []byte `protobuf:"bytes,3,opt,name=vote_extension,json=voteExtension,proto3" json:"vote_extension,omitempty"`
}
-func (m *RequestApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Sender) > 0 {
- i -= len(m.Sender)
- copy(dAtA[i:], m.Sender)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Sender)))
- i--
- dAtA[i] = 0x1a
- }
- if len(m.Chunk) > 0 {
- i -= len(m.Chunk)
- copy(dAtA[i:], m.Chunk)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Chunk)))
- i--
- dAtA[i] = 0x12
- }
- if m.Index != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Index))
- i--
- dAtA[i] = 0x8
+func (m *ExtendedVoteInfo) Reset() { *m = ExtendedVoteInfo{} }
+func (m *ExtendedVoteInfo) String() string { return proto.CompactTextString(m) }
+func (*ExtendedVoteInfo) ProtoMessage() {}
+func (*ExtendedVoteInfo) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{54}
+}
+func (m *ExtendedVoteInfo) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *ExtendedVoteInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_ExtendedVoteInfo.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
- return len(dAtA) - i, nil
+}
+func (m *ExtendedVoteInfo) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ExtendedVoteInfo.Merge(m, src)
+}
+func (m *ExtendedVoteInfo) XXX_Size() int {
+ return m.Size()
+}
+func (m *ExtendedVoteInfo) XXX_DiscardUnknown() {
+ xxx_messageInfo_ExtendedVoteInfo.DiscardUnknown(m)
}
-func (m *Response) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
+var xxx_messageInfo_ExtendedVoteInfo proto.InternalMessageInfo
+
+func (m *ExtendedVoteInfo) GetValidator() Validator {
+ if m != nil {
+ return m.Validator
}
- return dAtA[:n], nil
+ return Validator{}
}
-func (m *Response) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *ExtendedVoteInfo) GetSignedLastBlock() bool {
+ if m != nil {
+ return m.SignedLastBlock
+ }
+ return false
}
-func (m *Response) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Value != nil {
- {
- size := m.Value.Size()
- i -= size
- if _, err := m.Value.MarshalTo(dAtA[i:]); err != nil {
- return 0, err
- }
- }
+func (m *ExtendedVoteInfo) GetVoteExtension() []byte {
+ if m != nil {
+ return m.VoteExtension
}
- return len(dAtA) - i, nil
+ return nil
}
-func (m *Response_Exception) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+type Misbehavior struct {
+ Type MisbehaviorType `protobuf:"varint,1,opt,name=type,proto3,enum=tendermint.abci.MisbehaviorType" json:"type,omitempty"`
+ // The offending validator
+ Validator Validator `protobuf:"bytes,2,opt,name=validator,proto3" json:"validator"`
+ // The height when the offense occurred
+ Height int64 `protobuf:"varint,3,opt,name=height,proto3" json:"height,omitempty"`
+ // The corresponding time where the offense occurred
+ Time time.Time `protobuf:"bytes,4,opt,name=time,proto3,stdtime" json:"time"`
+ // Total voting power of the validator set in case the ABCI application does
+ // not store historical validators.
+ // https://github.com/tendermint/tendermint/issues/4581
+ TotalVotingPower int64 `protobuf:"varint,5,opt,name=total_voting_power,json=totalVotingPower,proto3" json:"total_voting_power,omitempty"`
}
-func (m *Response_Exception) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Exception != nil {
- {
- size, err := m.Exception.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+func (m *Misbehavior) Reset() { *m = Misbehavior{} }
+func (m *Misbehavior) String() string { return proto.CompactTextString(m) }
+func (*Misbehavior) ProtoMessage() {}
+func (*Misbehavior) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{55}
+}
+func (m *Misbehavior) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *Misbehavior) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_Misbehavior.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0xa
+ return b[:n], nil
}
- return len(dAtA) - i, nil
}
-func (m *Response_Echo) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *Misbehavior) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Misbehavior.Merge(m, src)
+}
+func (m *Misbehavior) XXX_Size() int {
+ return m.Size()
+}
+func (m *Misbehavior) XXX_DiscardUnknown() {
+ xxx_messageInfo_Misbehavior.DiscardUnknown(m)
}
-func (m *Response_Echo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Echo != nil {
- {
- size, err := m.Echo.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
+var xxx_messageInfo_Misbehavior proto.InternalMessageInfo
+
+func (m *Misbehavior) GetType() MisbehaviorType {
+ if m != nil {
+ return m.Type
}
- return len(dAtA) - i, nil
-}
-func (m *Response_Flush) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return MisbehaviorType_UNKNOWN
}
-func (m *Response_Flush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Flush != nil {
- {
- size, err := m.Flush.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
+func (m *Misbehavior) GetValidator() Validator {
+ if m != nil {
+ return m.Validator
}
- return len(dAtA) - i, nil
+ return Validator{}
}
-func (m *Response_Info) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+func (m *Misbehavior) GetHeight() int64 {
+ if m != nil {
+ return m.Height
+ }
+ return 0
}
-func (m *Response_Info) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Info != nil {
- {
- size, err := m.Info.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x22
+func (m *Misbehavior) GetTime() time.Time {
+ if m != nil {
+ return m.Time
}
- return len(dAtA) - i, nil
+ return time.Time{}
}
-func (m *Response_InitChain) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+
+func (m *Misbehavior) GetTotalVotingPower() int64 {
+ if m != nil {
+ return m.TotalVotingPower
+ }
+ return 0
}
-func (m *Response_InitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.InitChain != nil {
- {
- size, err := m.InitChain.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x2a
- }
- return len(dAtA) - i, nil
-}
-func (m *Response_Query) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+type Snapshot struct {
+ Height uint64 `protobuf:"varint,1,opt,name=height,proto3" json:"height,omitempty"`
+ Format uint32 `protobuf:"varint,2,opt,name=format,proto3" json:"format,omitempty"`
+ Chunks uint32 `protobuf:"varint,3,opt,name=chunks,proto3" json:"chunks,omitempty"`
+ Hash []byte `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
+ Metadata []byte `protobuf:"bytes,5,opt,name=metadata,proto3" json:"metadata,omitempty"`
+ CoreChainLockedHeight uint32 `protobuf:"varint,100,opt,name=core_chain_locked_height,json=coreChainLockedHeight,proto3" json:"core_chain_locked_height,omitempty"`
}
-func (m *Response_Query) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Query != nil {
- {
- size, err := m.Query.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x32
- }
- return len(dAtA) - i, nil
+func (m *Snapshot) Reset() { *m = Snapshot{} }
+func (m *Snapshot) String() string { return proto.CompactTextString(m) }
+func (*Snapshot) ProtoMessage() {}
+func (*Snapshot) Descriptor() ([]byte, []int) {
+ return fileDescriptor_252557cfdd89a31a, []int{56}
}
-func (m *Response_BeginBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *Snapshot) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-
-func (m *Response_BeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.BeginBlock != nil {
- {
- size, err := m.BeginBlock.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+func (m *Snapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_Snapshot.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
}
- i--
- dAtA[i] = 0x3a
+ return b[:n], nil
}
- return len(dAtA) - i, nil
}
-func (m *Response_CheckTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *Snapshot) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Snapshot.Merge(m, src)
}
-
-func (m *Response_CheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.CheckTx != nil {
- {
- size, err := m.CheckTx.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x42
- }
- return len(dAtA) - i, nil
+func (m *Snapshot) XXX_Size() int {
+ return m.Size()
}
-func (m *Response_DeliverTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (m *Snapshot) XXX_DiscardUnknown() {
+ xxx_messageInfo_Snapshot.DiscardUnknown(m)
}
-func (m *Response_DeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.DeliverTx != nil {
- {
- size, err := m.DeliverTx.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x4a
- }
- return len(dAtA) - i, nil
-}
-func (m *Response_EndBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
+var xxx_messageInfo_Snapshot proto.InternalMessageInfo
+
-func (m *Response_EndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.EndBlock != nil {
- {
- size, err := m.EndBlock.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x52
+func (m *Snapshot) GetHeight() uint64 {
+ if m != nil {
+ return m.Height
}
- return len(dAtA) - i, nil
-}
-func (m *Response_Commit) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return 0
}
-func (m *Response_Commit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.Commit != nil {
- {
- size, err := m.Commit.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x5a
+func (m *Snapshot) GetFormat() uint32 {
+ if m != nil {
+ return m.Format
}
- return len(dAtA) - i, nil
-}
-func (m *Response_ListSnapshots) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return 0
}
-func (m *Response_ListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.ListSnapshots != nil {
- {
- size, err := m.ListSnapshots.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x62
+func (m *Snapshot) GetChunks() uint32 {
+ if m != nil {
+ return m.Chunks
}
- return len(dAtA) - i, nil
-}
-func (m *Response_OfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return 0
}
-func (m *Response_OfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.OfferSnapshot != nil {
- {
- size, err := m.OfferSnapshot.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x6a
+func (m *Snapshot) GetHash() []byte {
+ if m != nil {
+ return m.Hash
}
- return len(dAtA) - i, nil
-}
-func (m *Response_LoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return nil
}
-func (m *Response_LoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.LoadSnapshotChunk != nil {
- {
- size, err := m.LoadSnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x72
+func (m *Snapshot) GetMetadata() []byte {
+ if m != nil {
+ return m.Metadata
}
- return len(dAtA) - i, nil
-}
-func (m *Response_ApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+ return nil
}
-func (m *Response_ApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- if m.ApplySnapshotChunk != nil {
- {
- size, err := m.ApplySnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x7a
+func (m *Snapshot) GetCoreChainLockedHeight() uint32 {
+ if m != nil {
+ return m.CoreChainLockedHeight
}
- return len(dAtA) - i, nil
+ return 0
}
-func (m *ResponseException) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
+
+func init() {
+ proto.RegisterEnum("tendermint.abci.CheckTxType", CheckTxType_name, CheckTxType_value)
+ proto.RegisterEnum("tendermint.abci.MisbehaviorType", MisbehaviorType_name, MisbehaviorType_value)
+ proto.RegisterEnum("tendermint.abci.ResponseOfferSnapshot_Result", ResponseOfferSnapshot_Result_name, ResponseOfferSnapshot_Result_value)
+ proto.RegisterEnum("tendermint.abci.ResponseApplySnapshotChunk_Result", ResponseApplySnapshotChunk_Result_name, ResponseApplySnapshotChunk_Result_value)
+ proto.RegisterEnum("tendermint.abci.ResponseProcessProposal_ProposalStatus", ResponseProcessProposal_ProposalStatus_name, ResponseProcessProposal_ProposalStatus_value)
+ proto.RegisterEnum("tendermint.abci.ResponseVerifyVoteExtension_VerifyStatus", ResponseVerifyVoteExtension_VerifyStatus_name, ResponseVerifyVoteExtension_VerifyStatus_value)
+ proto.RegisterEnum("tendermint.abci.TxRecord_TxAction", TxRecord_TxAction_name, TxRecord_TxAction_value)
+ proto.RegisterType((*Request)(nil), "tendermint.abci.Request")
+ proto.RegisterType((*RequestEcho)(nil), "tendermint.abci.RequestEcho")
+ proto.RegisterType((*RequestFlush)(nil), "tendermint.abci.RequestFlush")
+ proto.RegisterType((*RequestInfo)(nil), "tendermint.abci.RequestInfo")
+ proto.RegisterType((*RequestInitChain)(nil), "tendermint.abci.RequestInitChain")
+ proto.RegisterType((*RequestQuery)(nil), "tendermint.abci.RequestQuery")
+ proto.RegisterType((*RequestBeginBlock)(nil), "tendermint.abci.RequestBeginBlock")
+ proto.RegisterType((*RequestCheckTx)(nil), "tendermint.abci.RequestCheckTx")
+ proto.RegisterType((*RequestDeliverTx)(nil), "tendermint.abci.RequestDeliverTx")
+ proto.RegisterType((*RequestEndBlock)(nil), "tendermint.abci.RequestEndBlock")
+ proto.RegisterType((*RequestCommit)(nil), "tendermint.abci.RequestCommit")
+ proto.RegisterType((*RequestListSnapshots)(nil), "tendermint.abci.RequestListSnapshots")
+ proto.RegisterType((*RequestOfferSnapshot)(nil), "tendermint.abci.RequestOfferSnapshot")
+ proto.RegisterType((*RequestLoadSnapshotChunk)(nil), "tendermint.abci.RequestLoadSnapshotChunk")
+ proto.RegisterType((*RequestApplySnapshotChunk)(nil), "tendermint.abci.RequestApplySnapshotChunk")
+ proto.RegisterType((*RequestPrepareProposal)(nil), "tendermint.abci.RequestPrepareProposal")
+ proto.RegisterType((*RequestProcessProposal)(nil), "tendermint.abci.RequestProcessProposal")
+ proto.RegisterType((*RequestExtendVote)(nil), "tendermint.abci.RequestExtendVote")
+ proto.RegisterType((*RequestVerifyVoteExtension)(nil), "tendermint.abci.RequestVerifyVoteExtension")
+ proto.RegisterType((*RequestFinalizeBlock)(nil), "tendermint.abci.RequestFinalizeBlock")
+ proto.RegisterType((*Response)(nil), "tendermint.abci.Response")
+ proto.RegisterType((*ResponseException)(nil), "tendermint.abci.ResponseException")
+ proto.RegisterType((*ResponseEcho)(nil), "tendermint.abci.ResponseEcho")
+ proto.RegisterType((*ResponseFlush)(nil), "tendermint.abci.ResponseFlush")
+ proto.RegisterType((*ResponseInfo)(nil), "tendermint.abci.ResponseInfo")
+ proto.RegisterType((*ResponseInitChain)(nil), "tendermint.abci.ResponseInitChain")
+ proto.RegisterType((*ResponseQuery)(nil), "tendermint.abci.ResponseQuery")
+ proto.RegisterType((*ResponseBeginBlock)(nil), "tendermint.abci.ResponseBeginBlock")
+ proto.RegisterType((*ResponseCheckTx)(nil), "tendermint.abci.ResponseCheckTx")
+ proto.RegisterType((*ResponseDeliverTx)(nil), "tendermint.abci.ResponseDeliverTx")
+ proto.RegisterType((*ResponseEndBlock)(nil), "tendermint.abci.ResponseEndBlock")
+ proto.RegisterType((*ResponseCommit)(nil), "tendermint.abci.ResponseCommit")
+ proto.RegisterType((*ResponseListSnapshots)(nil), "tendermint.abci.ResponseListSnapshots")
+ proto.RegisterType((*ResponseOfferSnapshot)(nil), "tendermint.abci.ResponseOfferSnapshot")
+ proto.RegisterType((*ResponseLoadSnapshotChunk)(nil), "tendermint.abci.ResponseLoadSnapshotChunk")
+ proto.RegisterType((*ResponseApplySnapshotChunk)(nil), "tendermint.abci.ResponseApplySnapshotChunk")
+ proto.RegisterType((*ResponsePrepareProposal)(nil), "tendermint.abci.ResponsePrepareProposal")
+ proto.RegisterType((*ResponseProcessProposal)(nil), "tendermint.abci.ResponseProcessProposal")
+ proto.RegisterType((*ResponseExtendVote)(nil), "tendermint.abci.ResponseExtendVote")
+ proto.RegisterType((*ResponseVerifyVoteExtension)(nil), "tendermint.abci.ResponseVerifyVoteExtension")
+ proto.RegisterType((*ResponseFinalizeBlock)(nil), "tendermint.abci.ResponseFinalizeBlock")
+ proto.RegisterType((*CommitInfo)(nil), "tendermint.abci.CommitInfo")
+ proto.RegisterType((*ExtendedCommitInfo)(nil), "tendermint.abci.ExtendedCommitInfo")
+ proto.RegisterType((*Event)(nil), "tendermint.abci.Event")
+ proto.RegisterType((*EventAttribute)(nil), "tendermint.abci.EventAttribute")
+ proto.RegisterType((*ExecTxResult)(nil), "tendermint.abci.ExecTxResult")
+ proto.RegisterType((*TxResult)(nil), "tendermint.abci.TxResult")
+ proto.RegisterType((*TxRecord)(nil), "tendermint.abci.TxRecord")
+ proto.RegisterType((*Validator)(nil), "tendermint.abci.Validator")
+ proto.RegisterType((*ValidatorUpdate)(nil), "tendermint.abci.ValidatorUpdate")
+ proto.RegisterType((*ValidatorSetUpdate)(nil), "tendermint.abci.ValidatorSetUpdate")
+ proto.RegisterType((*ThresholdPublicKeyUpdate)(nil), "tendermint.abci.ThresholdPublicKeyUpdate")
+ proto.RegisterType((*QuorumHashUpdate)(nil), "tendermint.abci.QuorumHashUpdate")
+ proto.RegisterType((*VoteInfo)(nil), "tendermint.abci.VoteInfo")
+ proto.RegisterType((*ExtendedVoteInfo)(nil), "tendermint.abci.ExtendedVoteInfo")
+ proto.RegisterType((*Misbehavior)(nil), "tendermint.abci.Misbehavior")
+ proto.RegisterType((*Snapshot)(nil), "tendermint.abci.Snapshot")
}
-func (m *ResponseException) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func init() { proto.RegisterFile("tendermint/abci/types.proto", fileDescriptor_252557cfdd89a31a) }
+
+var fileDescriptor_252557cfdd89a31a = []byte{
+ // 3742 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x5b, 0xcb, 0x6f, 0x1b, 0xd7,
+ 0xd5, 0xe7, 0xf0, 0x25, 0xf2, 0x50, 0x7c, 0xe8, 0x4a, 0xb6, 0x69, 0xda, 0x96, 0x94, 0x31, 0x1c,
+ 0x3b, 0x4e, 0x22, 0x25, 0xf2, 0x97, 0xc4, 0xf9, 0x92, 0x7c, 0x81, 0x44, 0xd1, 0xa1, 0x6c, 0x59,
+ 0x92, 0x47, 0x94, 0x83, 0x7c, 0xf9, 0xe2, 0xc9, 0x88, 0x73, 0x25, 0x4e, 0x4c, 0x72, 0x26, 0x33,
+ 0x43, 0x85, 0xca, 0x36, 0x5f, 0x36, 0x59, 0x65, 0x53, 0xb4, 0x40, 0x11, 0x14, 0x28, 0x5a, 0xa0,
+ 0x9b, 0xa2, 0x7f, 0x40, 0x81, 0xae, 0xb3, 0xcc, 0xaa, 0x2d, 0x0a, 0x34, 0x0d, 0x92, 0x4d, 0xd1,
+ 0x6d, 0x81, 0x76, 0xd7, 0x16, 0xf7, 0x31, 0x4f, 0x72, 0xf8, 0x88, 0xd3, 0x00, 0x41, 0xb3, 0x9b,
+ 0x7b, 0xee, 0x39, 0x67, 0xee, 0xe3, 0xdc, 0x73, 0xce, 0xfd, 0xdd, 0x7b, 0xe1, 0x82, 0x8d, 0xbb,
+ 0x2a, 0x36, 0x3b, 0x5a, 0xd7, 0x5e, 0x55, 0x0e, 0x9b, 0xda, 0xaa, 0x7d, 0x6a, 0x60, 0x6b, 0xc5,
+ 0x30, 0x75, 0x5b, 0x47, 0x45, 0xaf, 0x72, 0x85, 0x54, 0x56, 0x2e, 0xf9, 0xb8, 0x9b, 0xe6, 0xa9,
+ 0x61, 0xeb, 0xab, 0x86, 0xa9, 0xeb, 0x47, 0x8c, 0xbf, 0x72, 0xd1, 0x57, 0x4d, 0xf5, 0xf8, 0xb5,
+ 0x05, 0x6a, 0xb9, 0xf0, 0x43, 0x7c, 0xea, 0xd4, 0x5e, 0x1a, 0x90, 0x35, 0x14, 0x53, 0xe9, 0x38,
+ 0xd5, 0x4b, 0xc7, 0xba, 0x7e, 0xdc, 0xc6, 0xab, 0xb4, 0x74, 0xd8, 0x3b, 0x5a, 0xb5, 0xb5, 0x0e,
+ 0xb6, 0x6c, 0xa5, 0x63, 0x70, 0x86, 0x85, 0x63, 0xfd, 0x58, 0xa7, 0x9f, 0xab, 0xe4, 0x8b, 0x51,
+ 0xc5, 0x7f, 0x02, 0xcc, 0x48, 0xf8, 0xdd, 0x1e, 0xb6, 0x6c, 0xb4, 0x06, 0x49, 0xdc, 0x6c, 0xe9,
+ 0x65, 0x61, 0x59, 0xb8, 0x96, 0x5b, 0xbb, 0xb8, 0x12, 0xea, 0xdc, 0x0a, 0xe7, 0xab, 0x35, 0x5b,
+ 0x7a, 0x3d, 0x26, 0x51, 0x5e, 0xf4, 0x1c, 0xa4, 0x8e, 0xda, 0x3d, 0xab, 0x55, 0x8e, 0x53, 0xa1,
+ 0x4b, 0x51, 0x42, 0xb7, 0x08, 0x53, 0x3d, 0x26, 0x31, 0x6e, 0xf2, 0x2b, 0xad, 0x7b, 0xa4, 0x97,
+ 0x13, 0xa3, 0x7f, 0xb5, 0xd5, 0x3d, 0xa2, 0xbf, 0x22, 0xbc, 0x68, 0x03, 0x40, 0xeb, 0x6a, 0xb6,
+ 0xdc, 0x6c, 0x29, 0x5a, 0xb7, 0x9c, 0xa4, 0x92, 0x8f, 0x45, 0x4b, 0x6a, 0x76, 0x95, 0x30, 0xd6,
+ 0x63, 0x52, 0x56, 0x73, 0x0a, 0xa4, 0xb9, 0xef, 0xf6, 0xb0, 0x79, 0x5a, 0x4e, 0x8d, 0x6e, 0xee,
+ 0x3d, 0xc2, 0x44, 0x9a, 0x4b, 0xb9, 0xd1, 0x16, 0xe4, 0x0e, 0xf1, 0xb1, 0xd6, 0x95, 0x0f, 0xdb,
+ 0x7a, 0xf3, 0x61, 0x39, 0x4d, 0x85, 0xc5, 0x28, 0xe1, 0x0d, 0xc2, 0xba, 0x41, 0x38, 0x37, 0xe2,
+ 0x65, 0xa1, 0x1e, 0x93, 0xe0, 0xd0, 0xa5, 0xa0, 0x97, 0x21, 0xd3, 0x6c, 0xe1, 0xe6, 0x43, 0xd9,
+ 0xee, 0x97, 0x67, 0xa8, 0x9e, 0xa5, 0x28, 0x3d, 0x55, 0xc2, 0xd7, 0xe8, 0xd7, 0x63, 0xd2, 0x4c,
+ 0x93, 0x7d, 0xa2, 0x5b, 0x00, 0x2a, 0x6e, 0x6b, 0x27, 0xd8, 0x24, 0xf2, 0x99, 0xd1, 0x63, 0xb0,
+ 0xc9, 0x38, 0x1b, 0x7d, 0xde, 0x8c, 0xac, 0xea, 0x10, 0x50, 0x15, 0xb2, 0xb8, 0xab, 0xf2, 0xee,
+ 0x64, 0xa9, 0x9a, 0xe5, 0xc8, 0xf9, 0xee, 0xaa, 0xfe, 0xce, 0x64, 0x30, 0x2f, 0xa3, 0x9b, 0x90,
+ 0x6e, 0xea, 0x9d, 0x8e, 0x66, 0x97, 0x81, 0x6a, 0x58, 0x8c, 0xec, 0x08, 0xe5, 0xaa, 0xc7, 0x24,
+ 0xce, 0x8f, 0x76, 0xa0, 0xd0, 0xd6, 0x2c, 0x5b, 0xb6, 0xba, 0x8a, 0x61, 0xb5, 0x74, 0xdb, 0x2a,
+ 0xe7, 0xa8, 0x86, 0x2b, 0x51, 0x1a, 0xb6, 0x35, 0xcb, 0xde, 0x77, 0x98, 0xeb, 0x31, 0x29, 0xdf,
+ 0xf6, 0x13, 0x88, 0x3e, 0xfd, 0xe8, 0x08, 0x9b, 0xae, 0xc2, 0xf2, 0xec, 0x68, 0x7d, 0xbb, 0x84,
+ 0xdb, 0x91, 0x27, 0xfa, 0x74, 0x3f, 0x01, 0xbd, 0x09, 0xf3, 0x6d, 0x5d, 0x51, 0x5d, 0x75, 0x72,
+ 0xb3, 0xd5, 0xeb, 0x3e, 0x2c, 0xe7, 0xa9, 0xd2, 0x27, 0x22, 0x1b, 0xa9, 0x2b, 0xaa, 0xa3, 0xa2,
+ 0x4a, 0x04, 0xea, 0x31, 0x69, 0xae, 0x1d, 0x26, 0xa2, 0x07, 0xb0, 0xa0, 0x18, 0x46, 0xfb, 0x34,
+ 0xac, 0xbd, 0x40, 0xb5, 0x5f, 0x8f, 0xd2, 0xbe, 0x4e, 0x64, 0xc2, 0xea, 0x91, 0x32, 0x40, 0x45,
+ 0x0d, 0x28, 0x19, 0x26, 0x36, 0x14, 0x13, 0xcb, 0x86, 0xa9, 0x1b, 0xba, 0xa5, 0xb4, 0xcb, 0x45,
+ 0xaa, 0xfb, 0x6a, 0x94, 0xee, 0x3d, 0xc6, 0xbf, 0xc7, 0xd9, 0xeb, 0x31, 0xa9, 0x68, 0x04, 0x49,
+ 0x4c, 0xab, 0xde, 0xc4, 0x96, 0xe5, 0x69, 0x2d, 0x8d, 0xd3, 0x4a, 0xf9, 0x83, 0x5a, 0x03, 0x24,
+ 0x54, 0x83, 0x1c, 0xee, 0x13, 0x71, 0xf9, 0x44, 0xb7, 0x71, 0x79, 0x6e, 0xf4, 0xc2, 0xaa, 0x51,
+ 0xd6, 0xfb, 0xba, 0x8d, 0xc9, 0xa2, 0xc2, 0x6e, 0x09, 0x29, 0x70, 0xe6, 0x04, 0x9b, 0xda, 0xd1,
+ 0x29, 0x55, 0x23, 0xd3, 0x1a, 0x4b, 0xd3, 0xbb, 0x65, 0x44, 0x15, 0x3e, 0x19, 0xa5, 0xf0, 0x3e,
+ 0x15, 0x22, 0x2a, 0x6a, 0x8e, 0x48, 0x3d, 0x26, 0xcd, 0x9f, 0x0c, 0x92, 0x89, 0x89, 0x1d, 0x69,
+ 0x5d, 0xa5, 0xad, 0xbd, 0x8f, 0xf9, 0xb2, 0x99, 0x1f, 0x6d, 0x62, 0xb7, 0x38, 0x37, 0x5d, 0x2b,
+ 0xc4, 0xc4, 0x8e, 0xfc, 0x84, 0x8d, 0x19, 0x48, 0x9d, 0x28, 0xed, 0x1e, 0x16, 0xaf, 0x42, 0xce,
+ 0xe7, 0x58, 0x51, 0x19, 0x66, 0x3a, 0xd8, 0xb2, 0x94, 0x63, 0x4c, 0xfd, 0x70, 0x56, 0x72, 0x8a,
+ 0x62, 0x01, 0x66, 0xfd, 0xce, 0x54, 0xfc, 0x58, 0x70, 0x25, 0x89, 0x9f, 0x24, 0x92, 0x27, 0xd8,
+ 0xa4, 0xdd, 0xe6, 0x92, 0xbc, 0x88, 0x2e, 0x43, 0x9e, 0x36, 0x59, 0x76, 0xea, 0x89, 0xb3, 0x4e,
+ 0x4a, 0xb3, 0x94, 0x78, 0x9f, 0x33, 0x2d, 0x41, 0xce, 0x58, 0x33, 0x5c, 0x96, 0x04, 0x65, 0x01,
+ 0x63, 0xcd, 0x70, 0x18, 0x1e, 0x83, 0x59, 0xd2, 0x3f, 0x97, 0x23, 0x49, 0x7f, 0x92, 0x23, 0x34,
+ 0xce, 0x22, 0xfe, 0x7f, 0x02, 0x4a, 0x61, 0x07, 0x8c, 0x6e, 0x42, 0x92, 0xc4, 0x22, 0x1e, 0x56,
+ 0x2a, 0x2b, 0x2c, 0x50, 0xad, 0x38, 0x81, 0x6a, 0xa5, 0xe1, 0x04, 0xaa, 0x8d, 0xcc, 0xa7, 0x9f,
+ 0x2f, 0xc5, 0x3e, 0xfe, 0xd3, 0x92, 0x20, 0x51, 0x09, 0x74, 0x9e, 0xf8, 0x4a, 0x45, 0xeb, 0xca,
+ 0x9a, 0x4a, 0x9b, 0x9c, 0x25, 0x8e, 0x50, 0xd1, 0xba, 0x5b, 0x2a, 0xda, 0x86, 0x52, 0x53, 0xef,
+ 0x5a, 0xb8, 0x6b, 0xf5, 0x2c, 0x99, 0x05, 0x42, 0x1e, 0x4c, 0x02, 0xee, 0x90, 0x85, 0xd7, 0xaa,
+ 0xc3, 0xb9, 0x47, 0x19, 0xa5, 0x62, 0x33, 0x48, 0x40, 0x3b, 0x90, 0x3f, 0x51, 0xda, 0x9a, 0xaa,
+ 0xd8, 0xba, 0x29, 0x5b, 0xd8, 0xe6, 0xd1, 0xe5, 0xf2, 0xc0, 0xdc, 0xde, 0x77, 0xb8, 0xf6, 0xb1,
+ 0x7d, 0x60, 0xa8, 0x8a, 0x8d, 0x37, 0x92, 0x9f, 0x7e, 0xbe, 0x24, 0x48, 0xb3, 0x27, 0xbe, 0x1a,
+ 0xf4, 0x38, 0x14, 0x15, 0xc3, 0x90, 0x2d, 0x5b, 0xb1, 0xb1, 0x7c, 0x78, 0x6a, 0x63, 0x8b, 0x06,
+ 0x9c, 0x59, 0x29, 0xaf, 0x18, 0xc6, 0x3e, 0xa1, 0x6e, 0x10, 0x22, 0xba, 0x02, 0x05, 0x12, 0x9b,
+ 0x34, 0xa5, 0x2d, 0xb7, 0xb0, 0x76, 0xdc, 0xb2, 0x69, 0x68, 0x49, 0x48, 0x79, 0x4e, 0xad, 0x53,
+ 0x22, 0x5a, 0x81, 0x79, 0x87, 0xad, 0xa9, 0x9b, 0xd8, 0xe1, 0x25, 0xe1, 0x23, 0x2f, 0xcd, 0xf1,
+ 0xaa, 0xaa, 0x6e, 0x62, 0xc6, 0x2f, 0xaa, 0xae, 0xa5, 0xd0, 0x38, 0x86, 0x10, 0x24, 0x55, 0xc5,
+ 0x56, 0xe8, 0x0c, 0xcc, 0x4a, 0xf4, 0x9b, 0xd0, 0x0c, 0xc5, 0x6e, 0xf1, 0x71, 0xa5, 0xdf, 0xe8,
+ 0x2c, 0xa4, 0xb9, 0xea, 0x04, 0x6d, 0x06, 0x2f, 0xa1, 0x05, 0x48, 0x19, 0xa6, 0x7e, 0x82, 0xe9,
+ 0xb0, 0x64, 0x24, 0x56, 0x10, 0x3f, 0x88, 0xc3, 0xdc, 0x40, 0xc4, 0x23, 0x7a, 0x5b, 0x8a, 0xd5,
+ 0x72, 0xfe, 0x45, 0xbe, 0xd1, 0xf3, 0x44, 0xaf, 0xa2, 0x62, 0x93, 0x67, 0x09, 0xe5, 0xc1, 0x29,
+ 0xaa, 0xd3, 0x7a, 0x3a, 0x98, 0x31, 0x89, 0x73, 0xa3, 0x3b, 0x50, 0x6a, 0x2b, 0x96, 0x2d, 0xb3,
+ 0xa8, 0x21, 0xfb, 0x32, 0x86, 0x0b, 0x03, 0x33, 0xc3, 0x62, 0x0c, 0x59, 0x08, 0x5c, 0x49, 0x81,
+ 0x88, 0x7a, 0x54, 0x74, 0x00, 0x0b, 0x87, 0xa7, 0xef, 0x2b, 0x5d, 0x5b, 0xeb, 0x62, 0xd9, 0x9d,
+ 0x2d, 0xab, 0x9c, 0x5c, 0x4e, 0x0c, 0x4d, 0x41, 0xee, 0x6a, 0xd6, 0x21, 0x6e, 0x29, 0x27, 0x9a,
+ 0xee, 0x34, 0x6b, 0xde, 0x95, 0x77, 0xcd, 0xc0, 0x12, 0x25, 0x28, 0x04, 0xc3, 0x35, 0x2a, 0x40,
+ 0xdc, 0xee, 0xf3, 0xfe, 0xc7, 0xed, 0x3e, 0x7a, 0x06, 0x92, 0xa4, 0x8f, 0xb4, 0xef, 0x85, 0x21,
+ 0x3f, 0xe2, 0x72, 0x8d, 0x53, 0x03, 0x4b, 0x94, 0x53, 0x14, 0xdd, 0x55, 0xe4, 0x86, 0xf0, 0xb0,
+ 0x56, 0xf1, 0x09, 0x28, 0x86, 0xe2, 0xb3, 0x6f, 0xfa, 0x04, 0xff, 0xf4, 0x89, 0x45, 0xc8, 0x07,
+ 0x02, 0xb1, 0x78, 0x16, 0x16, 0x86, 0xc5, 0x55, 0xb1, 0xe5, 0xd2, 0x03, 0xf1, 0x11, 0x3d, 0x07,
+ 0x19, 0x37, 0xb0, 0xb2, 0x55, 0x7c, 0x7e, 0xa0, 0x17, 0x0e, 0xb3, 0xe4, 0xb2, 0x92, 0xe5, 0x4b,
+ 0x56, 0x01, 0x35, 0x87, 0x38, 0x6d, 0xf8, 0x8c, 0x62, 0x18, 0x75, 0xc5, 0x6a, 0x89, 0x6f, 0x43,
+ 0x39, 0x2a, 0x68, 0x86, 0xba, 0x91, 0x74, 0xad, 0xf0, 0x2c, 0xa4, 0x8f, 0x74, 0xb3, 0xa3, 0xd8,
+ 0x54, 0x59, 0x5e, 0xe2, 0x25, 0x62, 0x9d, 0x2c, 0x80, 0x26, 0x28, 0x99, 0x15, 0x44, 0x19, 0xce,
+ 0x47, 0x06, 0x4e, 0x22, 0xa2, 0x75, 0x55, 0xcc, 0xc6, 0x33, 0x2f, 0xb1, 0x82, 0xa7, 0x88, 0x35,
+ 0x96, 0x15, 0xc8, 0x6f, 0x2d, 0xda, 0x57, 0xaa, 0x3f, 0x2b, 0xf1, 0x92, 0xf8, 0xab, 0x04, 0x9c,
+ 0x1d, 0x1e, 0x3e, 0xd1, 0x32, 0xcc, 0x76, 0x94, 0xbe, 0x6c, 0xf7, 0xf9, 0xda, 0x67, 0xd3, 0x01,
+ 0x1d, 0xa5, 0xdf, 0xe8, 0xb3, 0x85, 0x5f, 0x82, 0x84, 0xdd, 0xb7, 0xca, 0xf1, 0xe5, 0xc4, 0xb5,
+ 0x59, 0x89, 0x7c, 0xa2, 0x03, 0x98, 0x6b, 0xeb, 0x4d, 0xa5, 0x2d, 0xfb, 0x2c, 0x9e, 0x1b, 0xfb,
+ 0xa0, 0x1b, 0x62, 0x81, 0x10, 0xab, 0x03, 0x46, 0x5f, 0xa4, 0x3a, 0xb6, 0x5d, 0xcb, 0xff, 0x37,
+ 0x59, 0xbd, 0x6f, 0x8e, 0x52, 0x01, 0x4f, 0xe1, 0xf8, 0xfa, 0xf4, 0xd4, 0xbe, 0xfe, 0x19, 0x58,
+ 0xe8, 0xe2, 0xbe, 0xed, 0x6b, 0x23, 0x33, 0x9c, 0x19, 0x3a, 0x17, 0x88, 0xd4, 0x79, 0xff, 0x27,
+ 0x36, 0x84, 0x56, 0x61, 0x81, 0x65, 0x22, 0xd8, 0x24, 0x29, 0x09, 0x19, 0x6e, 0x2a, 0x91, 0xa1,
+ 0x12, 0x73, 0x4e, 0xdd, 0x9e, 0xa9, 0x37, 0xfa, 0xd4, 0xe8, 0x7e, 0xe2, 0x9f, 0xb1, 0x60, 0x1e,
+ 0xc2, 0xe7, 0x43, 0xf0, 0xe6, 0x63, 0xdf, 0xd5, 0xae, 0x06, 0xa6, 0x24, 0x3e, 0xa9, 0xff, 0x41,
+ 0x8e, 0xf8, 0x04, 0xb3, 0x91, 0x78, 0xb4, 0xd9, 0x70, 0x7c, 0x6e, 0xd2, 0xe7, 0x73, 0xbf, 0x93,
+ 0x33, 0xf4, 0xaa, 0x1b, 0x51, 0xbc, 0x54, 0x6f, 0x68, 0x44, 0xf1, 0x7a, 0x17, 0x0f, 0xb8, 0xba,
+ 0x9f, 0x0a, 0x50, 0x89, 0xce, 0xed, 0x86, 0xaa, 0x7a, 0x16, 0xce, 0x78, 0xb1, 0xdf, 0xdf, 0x4a,
+ 0xe6, 0x05, 0x90, 0x5b, 0xe9, 0x36, 0x33, 0x32, 0x4e, 0x5e, 0x81, 0x42, 0x28, 0xff, 0x64, 0x33,
+ 0x92, 0x3f, 0xf1, 0xb7, 0x42, 0xfc, 0x71, 0xc2, 0xf5, 0xb3, 0x81, 0x24, 0x71, 0x88, 0x15, 0xde,
+ 0x83, 0x79, 0x15, 0x37, 0x35, 0xf5, 0xeb, 0x1a, 0xe1, 0x1c, 0x97, 0xfe, 0xde, 0x06, 0x27, 0xb6,
+ 0xc1, 0xdf, 0xe5, 0x20, 0x23, 0x61, 0xcb, 0x20, 0x29, 0x22, 0xda, 0x80, 0x2c, 0xee, 0x37, 0xb1,
+ 0x61, 0x3b, 0x59, 0xf5, 0xf0, 0xdd, 0x09, 0xe3, 0xae, 0x39, 0x9c, 0x64, 0xaf, 0xed, 0x8a, 0xa1,
+ 0x1b, 0x1c, 0x56, 0x89, 0x46, 0x48, 0xb8, 0xb8, 0x1f, 0x57, 0x79, 0xde, 0xc1, 0x55, 0x12, 0x91,
+ 0x5b, 0x6b, 0x26, 0x15, 0x02, 0x56, 0x6e, 0x70, 0x60, 0x25, 0x39, 0xe6, 0x67, 0x01, 0x64, 0xa5,
+ 0x1a, 0x40, 0x56, 0x52, 0x63, 0xba, 0x19, 0x01, 0xad, 0x3c, 0xef, 0x40, 0x2b, 0xe9, 0x31, 0x2d,
+ 0x0e, 0x61, 0x2b, 0xb7, 0x83, 0xd8, 0xca, 0x4c, 0x44, 0xc8, 0x73, 0xa4, 0x47, 0x82, 0x2b, 0xaf,
+ 0xf8, 0xc0, 0x95, 0x4c, 0x24, 0xaa, 0xc1, 0x14, 0x0d, 0x41, 0x57, 0x5e, 0x0b, 0xa0, 0x2b, 0xd9,
+ 0x31, 0xe3, 0x30, 0x02, 0x5e, 0xd9, 0xf4, 0xc3, 0x2b, 0x10, 0x89, 0xd2, 0xf0, 0x79, 0x8f, 0xc2,
+ 0x57, 0x5e, 0x74, 0xf1, 0x95, 0x5c, 0x24, 0x50, 0xc4, 0xfb, 0x12, 0x06, 0x58, 0x76, 0x07, 0x00,
+ 0x16, 0x06, 0x88, 0x3c, 0x1e, 0xa9, 0x62, 0x0c, 0xc2, 0xb2, 0x3b, 0x80, 0xb0, 0xe4, 0xc7, 0x28,
+ 0x1c, 0x03, 0xb1, 0xfc, 0xdf, 0x70, 0x88, 0x25, 0x1a, 0x04, 0xe1, 0xcd, 0x9c, 0x0c, 0x63, 0x91,
+ 0x23, 0x30, 0x96, 0x62, 0x24, 0x1e, 0xc0, 0xd4, 0x4f, 0x0c, 0xb2, 0x1c, 0x0c, 0x01, 0x59, 0x18,
+ 0x1c, 0x72, 0x2d, 0x52, 0xf9, 0x04, 0x28, 0xcb, 0xc1, 0x10, 0x94, 0x65, 0x6e, 0xac, 0xda, 0xb1,
+ 0x30, 0xcb, 0xad, 0x20, 0xcc, 0x82, 0xc6, 0xac, 0xb1, 0x48, 0x9c, 0xe5, 0x30, 0x0a, 0x67, 0x61,
+ 0x58, 0xc8, 0x53, 0x91, 0x1a, 0xa7, 0x00, 0x5a, 0x76, 0x07, 0x80, 0x96, 0x85, 0x31, 0x96, 0x36,
+ 0x29, 0xd2, 0xf2, 0x04, 0xc9, 0x2e, 0x42, 0xae, 0x9a, 0x24, 0xfd, 0xd8, 0x34, 0x75, 0x93, 0x63,
+ 0x26, 0xac, 0x20, 0x5e, 0x23, 0x3b, 0x68, 0xcf, 0x2d, 0x8f, 0x40, 0x65, 0xe8, 0xe6, 0xca, 0xe7,
+ 0x8a, 0xc5, 0x3f, 0x0a, 0x9e, 0x2c, 0xdd, 0x78, 0xfa, 0x77, 0xdf, 0x59, 0xbe, 0xfb, 0xf6, 0x61,
+ 0x35, 0xf1, 0x20, 0x56, 0xb3, 0x04, 0x39, 0xb2, 0x69, 0x0a, 0xc1, 0x30, 0x8a, 0xe1, 0xc2, 0x30,
+ 0xd7, 0x61, 0x8e, 0xa6, 0x02, 0x0c, 0xd1, 0xe1, 0xf1, 0x35, 0x49, 0xe3, 0x6b, 0x91, 0x54, 0xb0,
+ 0x51, 0x60, 0x81, 0xf6, 0x69, 0x98, 0xf7, 0xf1, 0xba, 0x9b, 0x31, 0x86, 0x45, 0x94, 0x5c, 0xee,
+ 0x75, 0xb6, 0x2b, 0xbb, 0x9d, 0xcc, 0xa8, 0x25, 0x2c, 0x5d, 0xe2, 0x99, 0x86, 0x89, 0x59, 0x40,
+ 0x90, 0x09, 0x0b, 0x56, 0xf9, 0xaf, 0xc4, 0x3f, 0xc7, 0xbd, 0x61, 0xf4, 0x40, 0x9e, 0x61, 0x78,
+ 0x8c, 0xf0, 0xb5, 0xf1, 0x18, 0xff, 0xce, 0x31, 0x11, 0xd8, 0x39, 0xa2, 0x37, 0x61, 0x21, 0x00,
+ 0xd5, 0xc8, 0x3d, 0x0a, 0xc3, 0x94, 0xd5, 0xe9, 0x10, 0x9b, 0x98, 0x2f, 0xb1, 0x73, 0x6b, 0xd0,
+ 0x5b, 0x70, 0x81, 0xa6, 0x17, 0xa1, 0xce, 0x3b, 0xff, 0xc0, 0x83, 0x6e, 0xd8, 0xe9, 0x90, 0x89,
+ 0xe9, 0x38, 0x6c, 0xeb, 0xcd, 0x87, 0xd2, 0x39, 0xa2, 0x23, 0x40, 0xe2, 0xea, 0x23, 0x70, 0x9c,
+ 0xa3, 0x28, 0x1c, 0xe7, 0xef, 0x82, 0x67, 0x5c, 0x2e, 0x92, 0xd3, 0xd4, 0x55, 0xcc, 0xf7, 0xad,
+ 0xf4, 0x9b, 0x64, 0x8d, 0x6d, 0xfd, 0x98, 0xef, 0x4e, 0xc9, 0x27, 0xe1, 0x72, 0x93, 0x80, 0x2c,
+ 0x8f, 0xf1, 0xee, 0x96, 0x97, 0xa5, 0x62, 0x7c, 0xcb, 0x5b, 0x82, 0xc4, 0x43, 0xcc, 0x42, 0xf6,
+ 0xac, 0x44, 0x3e, 0x09, 0x1f, 0x5d, 0x2d, 0x3c, 0xa5, 0x62, 0x05, 0x74, 0x13, 0xb2, 0xf4, 0x1c,
+ 0x4b, 0xd6, 0x0d, 0x8b, 0x47, 0xd6, 0x40, 0xf6, 0xc9, 0x8e, 0xab, 0x56, 0xf6, 0x08, 0xcf, 0xae,
+ 0x61, 0x49, 0x19, 0x83, 0x7f, 0xf9, 0x72, 0xc0, 0x6c, 0x20, 0x07, 0xbc, 0x08, 0x59, 0xd2, 0x7a,
+ 0xcb, 0x50, 0x9a, 0x98, 0x86, 0xc8, 0xac, 0xe4, 0x11, 0xc4, 0x07, 0x80, 0x06, 0x03, 0x3e, 0xaa,
+ 0x43, 0x1a, 0x9f, 0xe0, 0xae, 0xcd, 0x52, 0xe4, 0xdc, 0xda, 0xd9, 0xc1, 0x8d, 0x31, 0xa9, 0xde,
+ 0x28, 0x93, 0x09, 0xfe, 0xcb, 0xe7, 0x4b, 0x25, 0xc6, 0xfd, 0x94, 0xde, 0xd1, 0x6c, 0xdc, 0x31,
+ 0xec, 0x53, 0x89, 0xcb, 0x8b, 0x7f, 0x88, 0x43, 0x31, 0x94, 0x08, 0x0c, 0x1d, 0x5b, 0x67, 0xed,
+ 0xc6, 0x7d, 0xc8, 0xd9, 0x64, 0xe3, 0x7d, 0x09, 0xe0, 0x58, 0xb1, 0xe4, 0xf7, 0x94, 0xae, 0x8d,
+ 0x55, 0x3e, 0xe8, 0xd9, 0x63, 0xc5, 0x7a, 0x9d, 0x12, 0x88, 0x85, 0x93, 0xea, 0x9e, 0x85, 0x55,
+ 0x8e, 0xf9, 0xcd, 0x1c, 0x2b, 0xd6, 0x81, 0x85, 0x55, 0x5f, 0x2f, 0x67, 0x1e, 0xad, 0x97, 0xc1,
+ 0x31, 0xce, 0x84, 0xc6, 0xd8, 0x07, 0x6c, 0x64, 0xfd, 0xc0, 0x06, 0xaa, 0x40, 0xc6, 0x30, 0x35,
+ 0xdd, 0xd4, 0xec, 0x53, 0x3a, 0x31, 0x09, 0xc9, 0x2d, 0xa3, 0xcb, 0x90, 0xef, 0xe0, 0x8e, 0xa1,
+ 0xeb, 0x6d, 0x99, 0x79, 0xcd, 0x1c, 0x15, 0x9d, 0xe5, 0xc4, 0x1a, 0x75, 0x9e, 0x1f, 0xfa, 0x3c,
+ 0x84, 0x07, 0x60, 0x7d, 0xb3, 0xc3, 0xbb, 0x38, 0x64, 0x78, 0x7d, 0x14, 0xd2, 0x89, 0xd0, 0xf8,
+ 0xba, 0xe5, 0x6f, 0x6b, 0x80, 0xc5, 0xbf, 0xc6, 0xa1, 0x14, 0x4e, 0xf2, 0xd0, 0x1b, 0x70, 0x2e,
+ 0xe4, 0x28, 0xb9, 0x77, 0xb1, 0xf8, 0x06, 0x61, 0x02, 0x7f, 0x79, 0x26, 0xe8, 0x2f, 0x99, 0x77,
+ 0xb1, 0x7c, 0xfd, 0x4a, 0x3c, 0x62, 0xbf, 0xc6, 0xf8, 0x41, 0xf5, 0x11, 0xfd, 0x60, 0x94, 0x0f,
+ 0xc7, 0xd3, 0xa2, 0xee, 0x43, 0x7c, 0xb8, 0xb8, 0x05, 0x85, 0x60, 0x5a, 0x3c, 0xd4, 0xca, 0x2e,
+ 0x43, 0xde, 0xc4, 0x36, 0xe9, 0x58, 0x60, 0x27, 0x3f, 0xcb, 0x88, 0xdc, 0xff, 0xee, 0xc1, 0x99,
+ 0xa1, 0xe9, 0x31, 0x7a, 0x01, 0xb2, 0x5e, 0x66, 0xcd, 0x7c, 0xd1, 0x08, 0x44, 0xd4, 0xe3, 0x15,
+ 0x7f, 0x23, 0x78, 0x2a, 0x83, 0x18, 0x6b, 0x0d, 0xd2, 0x26, 0xb6, 0x7a, 0x6d, 0x86, 0x7a, 0x16,
+ 0xd6, 0x9e, 0x9e, 0x2c, 0xb1, 0x26, 0xd4, 0x5e, 0xdb, 0x96, 0xb8, 0xb0, 0xf8, 0x00, 0xd2, 0x8c,
+ 0x82, 0x72, 0x30, 0x73, 0xb0, 0x73, 0x67, 0x67, 0xf7, 0xf5, 0x9d, 0x52, 0x0c, 0x01, 0xa4, 0xd7,
+ 0xab, 0xd5, 0xda, 0x5e, 0xa3, 0x24, 0xa0, 0x2c, 0xa4, 0xd6, 0x37, 0x76, 0xa5, 0x46, 0x29, 0x4e,
+ 0xc8, 0x52, 0xed, 0x76, 0xad, 0xda, 0x28, 0x25, 0xd0, 0x1c, 0xe4, 0xd9, 0xb7, 0x7c, 0x6b, 0x57,
+ 0xba, 0xbb, 0xde, 0x28, 0x25, 0x7d, 0xa4, 0xfd, 0xda, 0xce, 0x66, 0x4d, 0x2a, 0xa5, 0xc4, 0x67,
+ 0xe1, 0x7c, 0x64, 0x2a, 0xee, 0x01, 0xa8, 0x82, 0x0f, 0x40, 0x15, 0x7f, 0x14, 0x87, 0x4a, 0x74,
+ 0x7e, 0x8d, 0x6e, 0x87, 0x3a, 0xbe, 0x36, 0x45, 0x72, 0x1e, 0xea, 0x3d, 0xba, 0x02, 0x05, 0x13,
+ 0x1f, 0x61, 0xbb, 0xd9, 0x62, 0xf9, 0x3e, 0x43, 0x58, 0xf3, 0x52, 0x9e, 0x53, 0xa9, 0x90, 0xc5,
+ 0xd8, 0xde, 0xc1, 0x4d, 0x5b, 0x66, 0x2e, 0x8f, 0x2d, 0x98, 0x2c, 0x61, 0x23, 0xd4, 0x7d, 0x46,
+ 0x14, 0xdf, 0x9e, 0x6a, 0x2c, 0xb3, 0x90, 0x92, 0x6a, 0x0d, 0xe9, 0x8d, 0x52, 0x02, 0x21, 0x28,
+ 0xd0, 0x4f, 0x79, 0x7f, 0x67, 0x7d, 0x6f, 0xbf, 0xbe, 0x4b, 0xc6, 0x72, 0x1e, 0x8a, 0xce, 0x58,
+ 0x3a, 0xc4, 0x94, 0xf8, 0xdb, 0x38, 0x9c, 0x8b, 0xd8, 0x1d, 0xa0, 0x9b, 0x00, 0x76, 0x5f, 0x36,
+ 0x71, 0x53, 0x37, 0xd5, 0x68, 0x23, 0x6b, 0xf4, 0x25, 0xca, 0x21, 0x65, 0x6d, 0xfe, 0x65, 0x8d,
+ 0xc0, 0xdd, 0xd1, 0xcb, 0x5c, 0x29, 0xe9, 0x95, 0xe3, 0x26, 0x2e, 0x0d, 0x81, 0x97, 0x71, 0x93,
+ 0x28, 0xa6, 0x63, 0x4b, 0x15, 0x53, 0x7e, 0x74, 0x17, 0xe6, 0xbc, 0x75, 0xeb, 0x78, 0x2d, 0x86,
+ 0x24, 0x2f, 0x47, 0x2f, 0x5a, 0xb6, 0x2e, 0xa5, 0xd2, 0x49, 0x90, 0x60, 0x8d, 0x72, 0x85, 0xa9,
+ 0x47, 0x73, 0x85, 0xe2, 0xcf, 0x12, 0xfe, 0x81, 0x0d, 0x6e, 0x86, 0x76, 0x21, 0x6d, 0xd9, 0x8a,
+ 0xdd, 0xb3, 0xb8, 0xc1, 0xbd, 0x30, 0xe9, 0xce, 0x6a, 0xc5, 0xf9, 0xd8, 0xa7, 0xe2, 0x12, 0x57,
+ 0xf3, 0xfd, 0x78, 0x5b, 0xe2, 0x73, 0x50, 0x08, 0x0e, 0x4e, 0xf4, 0x92, 0xf1, 0x7c, 0x4e, 0x5c,
+ 0x7c, 0xc9, 0x4b, 0xf3, 0x7c, 0x80, 0xef, 0x20, 0x8c, 0x2a, 0x0c, 0x83, 0x51, 0x7f, 0x2e, 0xc0,
+ 0x85, 0x11, 0xfb, 0x4b, 0x74, 0x2f, 0x34, 0xcf, 0x2f, 0x4e, 0xb3, 0x3b, 0x5d, 0x61, 0xb4, 0xe0,
+ 0x4c, 0x8b, 0x37, 0x60, 0xd6, 0x4f, 0x9f, 0xac, 0x93, 0x7f, 0x4b, 0x78, 0x3e, 0x3f, 0x88, 0xf7,
+ 0x7e, 0x63, 0xf9, 0x6c, 0xc8, 0xce, 0xe2, 0x53, 0xda, 0xd9, 0x08, 0xc3, 0x48, 0x3e, 0x62, 0x4e,
+ 0xe2, 0x5f, 0x1b, 0xa9, 0xe0, 0xda, 0x18, 0x08, 0xc1, 0xe9, 0xc1, 0x10, 0xfc, 0x9d, 0xce, 0x44,
+ 0x7e, 0x20, 0x00, 0xf8, 0x0e, 0xa0, 0x17, 0x20, 0x65, 0xea, 0xbd, 0xae, 0x4a, 0xcd, 0x31, 0x25,
+ 0xb1, 0x02, 0xd9, 0xef, 0xbf, 0xdb, 0xd3, 0xcd, 0x5e, 0xc7, 0xbf, 0xdb, 0x05, 0x46, 0xa2, 0xc3,
+ 0x74, 0x15, 0x8a, 0x6c, 0xfb, 0x6e, 0x69, 0xc7, 0x5d, 0xc5, 0xee, 0x99, 0x98, 0x63, 0xec, 0x05,
+ 0x4a, 0xde, 0x77, 0xa8, 0x84, 0x91, 0x5d, 0x38, 0xf0, 0x18, 0xd9, 0x88, 0x17, 0x28, 0xd9, 0x65,
+ 0x14, 0x35, 0x40, 0x83, 0x07, 0x88, 0x11, 0xcd, 0x7b, 0x05, 0x52, 0x64, 0xd5, 0x39, 0x36, 0xf5,
+ 0x58, 0xe4, 0x51, 0x24, 0x59, 0x3d, 0xbe, 0x83, 0x07, 0x26, 0x25, 0xbe, 0x0f, 0x29, 0x6a, 0xc2,
+ 0x24, 0x07, 0xa3, 0x87, 0xe0, 0x1c, 0x04, 0x21, 0xdf, 0xe8, 0x2d, 0x00, 0xc5, 0xb6, 0x4d, 0xed,
+ 0xb0, 0xe7, 0xfd, 0x60, 0x69, 0xf8, 0x12, 0x58, 0x77, 0xf8, 0x36, 0x2e, 0xf2, 0xb5, 0xb0, 0xe0,
+ 0x89, 0xfa, 0xd6, 0x83, 0x4f, 0xa1, 0xb8, 0x03, 0x85, 0xa0, 0xac, 0xb3, 0xdb, 0x65, 0x6d, 0x08,
+ 0xee, 0x76, 0x19, 0x0a, 0xc3, 0x77, 0xbb, 0xee, 0x5e, 0x39, 0xc1, 0xee, 0x3b, 0xd0, 0x82, 0xf8,
+ 0x0f, 0x01, 0x66, 0xfd, 0x2b, 0xe8, 0x3f, 0x6d, 0xc3, 0x28, 0x7e, 0x28, 0x40, 0xc6, 0xed, 0x7c,
+ 0xc4, 0x65, 0x03, 0x6f, 0xec, 0xe2, 0xfe, 0xa3, 0x75, 0x76, 0x7b, 0x21, 0xe1, 0xde, 0x89, 0x78,
+ 0xc9, 0x4d, 0xfa, 0xa2, 0x0e, 0x2a, 0xfc, 0x23, 0xed, 0x5c, 0x0b, 0xe1, 0x39, 0xee, 0x0f, 0x79,
+ 0x3b, 0x48, 0xb6, 0x83, 0xfe, 0x1b, 0xd2, 0x4a, 0xd3, 0x3d, 0x9e, 0x29, 0x0c, 0xc1, 0xeb, 0x1d,
+ 0xd6, 0x95, 0x46, 0x7f, 0x9d, 0x72, 0x4a, 0x5c, 0x82, 0xb7, 0x2a, 0xee, 0xde, 0xa9, 0x78, 0x95,
+ 0xe8, 0x65, 0x3c, 0x41, 0xd7, 0x5e, 0x00, 0x38, 0xd8, 0xb9, 0xbb, 0xbb, 0xb9, 0x75, 0x6b, 0xab,
+ 0xb6, 0xc9, 0xd3, 0xbe, 0xcd, 0xcd, 0xda, 0x66, 0x29, 0x4e, 0xf8, 0xa4, 0xda, 0xdd, 0xdd, 0xfb,
+ 0xb5, 0xcd, 0x52, 0x42, 0x5c, 0x87, 0xac, 0xeb, 0x21, 0xe8, 0xad, 0x19, 0xfd, 0x3d, 0x7e, 0x6f,
+ 0x20, 0x21, 0xb1, 0x02, 0x5a, 0x84, 0x9c, 0xff, 0x18, 0x8a, 0x2d, 0xe5, 0xac, 0xe1, 0x1e, 0x3f,
+ 0xfd, 0x52, 0x80, 0x62, 0x28, 0x94, 0xa3, 0x97, 0x60, 0xc6, 0xe8, 0x1d, 0xca, 0x8e, 0xed, 0x86,
+ 0x4e, 0xe9, 0x1c, 0xec, 0xa5, 0x77, 0xd8, 0xd6, 0x9a, 0x77, 0xf0, 0x29, 0xf7, 0x48, 0x69, 0xa3,
+ 0x77, 0x78, 0x87, 0x99, 0x38, 0x6b, 0x46, 0x7c, 0x44, 0x33, 0x12, 0xa1, 0x66, 0xa0, 0xab, 0x30,
+ 0xdb, 0xd5, 0x55, 0x2c, 0x2b, 0xaa, 0x6a, 0x62, 0x8b, 0xc5, 0x81, 0x2c, 0xd7, 0x9c, 0x23, 0x35,
+ 0xeb, 0xac, 0x42, 0xfc, 0x42, 0x00, 0x34, 0xe8, 0x15, 0xd1, 0xfe, 0xb0, 0xd4, 0x45, 0x98, 0x2c,
+ 0x75, 0xe1, 0xd3, 0x3d, 0x98, 0xc0, 0x34, 0x60, 0xc1, 0x6e, 0x99, 0xd8, 0x6a, 0xe9, 0x6d, 0x55,
+ 0x36, 0x68, 0x7f, 0xe9, 0xa0, 0xc4, 0x27, 0x1c, 0x94, 0x98, 0x84, 0x5c, 0x79, 0xb7, 0x66, 0xac,
+ 0x07, 0x16, 0x0d, 0x28, 0x37, 0x06, 0xc4, 0x78, 0x3f, 0xa3, 0x9a, 0x24, 0x3c, 0x4a, 0x93, 0xc4,
+ 0x1b, 0x50, 0xba, 0xe7, 0xfe, 0x9f, 0xff, 0x29, 0xd4, 0x4c, 0x61, 0xa0, 0x99, 0x27, 0x90, 0x71,
+ 0x9c, 0x30, 0xfa, 0x1f, 0xc8, 0xba, 0xa3, 0xe7, 0x5e, 0xbc, 0x8b, 0x1c, 0x76, 0xde, 0x12, 0x4f,
+ 0x04, 0x5d, 0x87, 0x39, 0x12, 0x45, 0x9c, 0x63, 0x67, 0x86, 0xc3, 0xc7, 0xa9, 0x37, 0x2c, 0xb2,
+ 0x8a, 0x6d, 0x07, 0x3c, 0x26, 0x79, 0x58, 0x29, 0x1c, 0x05, 0xbe, 0xcd, 0x06, 0x0c, 0xc9, 0x17,
+ 0x13, 0xc3, 0xf2, 0xc5, 0x0f, 0xe2, 0x90, 0xf3, 0x1d, 0x66, 0xa3, 0xff, 0xf2, 0x85, 0xa4, 0xc2,
+ 0x10, 0xab, 0xf4, 0xf1, 0x7a, 0x77, 0xb3, 0x82, 0x1d, 0x8b, 0x4f, 0xdf, 0xb1, 0xa8, 0xbb, 0x03,
+ 0xce, 0x99, 0x78, 0x72, 0xea, 0x33, 0xf1, 0xa7, 0x00, 0xd9, 0xba, 0xad, 0xb4, 0xe5, 0x13, 0xdd,
+ 0xd6, 0xba, 0xc7, 0x32, 0x5b, 0xed, 0x2c, 0x80, 0x94, 0x68, 0xcd, 0x7d, 0x5a, 0xb1, 0x47, 0xe8,
+ 0xe2, 0xaf, 0x05, 0xc8, 0xb8, 0xa0, 0xc3, 0xb4, 0x57, 0xad, 0xce, 0x42, 0x9a, 0xef, 0xab, 0xd9,
+ 0x5d, 0x2b, 0x5e, 0x1a, 0x7a, 0xf8, 0x5f, 0x81, 0x4c, 0x07, 0xdb, 0x0a, 0x8d, 0x86, 0x2c, 0x0f,
+ 0x71, 0xcb, 0xe8, 0x05, 0x28, 0x47, 0x9d, 0x2f, 0xd0, 0x94, 0x2e, 0x4f, 0xd2, 0x49, 0x5f, 0xb6,
+ 0x86, 0x55, 0x96, 0x0e, 0x5e, 0x7f, 0x11, 0x72, 0xbe, 0xeb, 0x72, 0x24, 0xb2, 0xee, 0xd4, 0x5e,
+ 0x2f, 0xc5, 0x2a, 0x33, 0x1f, 0x7d, 0xb2, 0x9c, 0xd8, 0xc1, 0xef, 0xa1, 0x32, 0x71, 0xc7, 0xd5,
+ 0x7a, 0xad, 0x7a, 0xa7, 0x24, 0x54, 0x72, 0x1f, 0x7d, 0xb2, 0x3c, 0x23, 0x61, 0x7a, 0xbc, 0x7b,
+ 0xfd, 0x0e, 0x14, 0x43, 0x33, 0x1a, 0xf4, 0xf1, 0x08, 0x0a, 0x9b, 0x07, 0x7b, 0xdb, 0x5b, 0xd5,
+ 0xf5, 0x46, 0x4d, 0xbe, 0xbf, 0xdb, 0xa8, 0x95, 0x04, 0x74, 0x0e, 0xe6, 0xb7, 0xb7, 0x5e, 0xab,
+ 0x37, 0xe4, 0xea, 0xf6, 0x56, 0x6d, 0xa7, 0x21, 0xaf, 0x37, 0x1a, 0xeb, 0xd5, 0x3b, 0xa5, 0xf8,
+ 0xda, 0x2f, 0x72, 0x50, 0x5c, 0xdf, 0xa8, 0x6e, 0xad, 0x1b, 0x46, 0x5b, 0x6b, 0x2a, 0x34, 0x62,
+ 0x54, 0x21, 0x49, 0xcf, 0x8a, 0x46, 0x3e, 0x9c, 0xa8, 0x8c, 0x3e, 0xff, 0x47, 0xb7, 0x20, 0x45,
+ 0x8f, 0x91, 0xd0, 0xe8, 0x97, 0x14, 0x95, 0x31, 0x17, 0x02, 0x48, 0x63, 0xe8, 0x3a, 0x1c, 0xf9,
+ 0xb4, 0xa2, 0x32, 0xfa, 0x7e, 0x00, 0xda, 0x86, 0x19, 0x07, 0x1c, 0x1f, 0xf7, 0x48, 0xa1, 0x32,
+ 0xf6, 0xa0, 0x9d, 0x74, 0x8d, 0x1d, 0x62, 0x8c, 0x7e, 0x75, 0x51, 0x19, 0x73, 0x73, 0x00, 0x6d,
+ 0x41, 0x9a, 0x03, 0x7b, 0x63, 0x1e, 0x1c, 0x54, 0xc6, 0x1d, 0x98, 0x23, 0x09, 0xb2, 0xde, 0x11,
+ 0xd6, 0xf8, 0xb7, 0x24, 0x95, 0x09, 0x2e, 0x45, 0xa0, 0x07, 0x90, 0x0f, 0x82, 0x85, 0x93, 0x3d,
+ 0x6a, 0xa8, 0x4c, 0x78, 0x34, 0x4f, 0xf4, 0x07, 0x91, 0xc3, 0xc9, 0x1e, 0x39, 0x54, 0x26, 0x3c,
+ 0xa9, 0x47, 0xef, 0xc0, 0xdc, 0x20, 0xb2, 0x37, 0xf9, 0x9b, 0x87, 0xca, 0x14, 0x67, 0xf7, 0xa8,
+ 0x03, 0x68, 0x08, 0x22, 0x38, 0xc5, 0x13, 0x88, 0xca, 0x34, 0x47, 0xf9, 0x48, 0x85, 0x62, 0x18,
+ 0x65, 0x9b, 0xf4, 0x49, 0x44, 0x65, 0xe2, 0x63, 0x7d, 0xf6, 0x97, 0x20, 0xe4, 0x34, 0xe9, 0x13,
+ 0x89, 0xca, 0xc4, 0xa7, 0xfc, 0xe8, 0x00, 0xc0, 0x07, 0x99, 0x4c, 0xf0, 0x64, 0xa2, 0x32, 0xc9,
+ 0x79, 0x3f, 0x32, 0x60, 0x7e, 0x18, 0x96, 0x32, 0xcd, 0x0b, 0x8a, 0xca, 0x54, 0xd7, 0x00, 0x88,
+ 0x3d, 0x07, 0x51, 0x91, 0xc9, 0x5e, 0x54, 0x54, 0x26, 0xbc, 0x0f, 0xb0, 0x51, 0xfb, 0xf4, 0xcb,
+ 0x45, 0xe1, 0xb3, 0x2f, 0x17, 0x85, 0x2f, 0xbe, 0x5c, 0x14, 0x3e, 0xfe, 0x6a, 0x31, 0xf6, 0xd9,
+ 0x57, 0x8b, 0xb1, 0xdf, 0x7f, 0xb5, 0x18, 0xfb, 0xdf, 0x27, 0x8f, 0x35, 0xbb, 0xd5, 0x3b, 0x5c,
+ 0x69, 0xea, 0x9d, 0x55, 0xff, 0xe3, 0xba, 0x61, 0x0f, 0xfe, 0x0e, 0xd3, 0x34, 0x12, 0xdf, 0xf8,
+ 0x57, 0x00, 0x00, 0x00, 0xff, 0xff, 0x0d, 0x2a, 0x96, 0xd0, 0x10, 0x38, 0x00, 0x00,
}
-func (m *ResponseException) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Error) > 0 {
- i -= len(m.Error)
- copy(dAtA[i:], m.Error)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Error)))
- i--
- dAtA[i] = 0xa
- }
- return len(dAtA) - i, nil
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// ABCIApplicationClient is the client API for ABCIApplication service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
+type ABCIApplicationClient interface {
+ Echo(ctx context.Context, in *RequestEcho, opts ...grpc.CallOption) (*ResponseEcho, error)
+ Flush(ctx context.Context, in *RequestFlush, opts ...grpc.CallOption) (*ResponseFlush, error)
+ Info(ctx context.Context, in *RequestInfo, opts ...grpc.CallOption) (*ResponseInfo, error)
+ CheckTx(ctx context.Context, in *RequestCheckTx, opts ...grpc.CallOption) (*ResponseCheckTx, error)
+ Query(ctx context.Context, in *RequestQuery, opts ...grpc.CallOption) (*ResponseQuery, error)
+ Commit(ctx context.Context, in *RequestCommit, opts ...grpc.CallOption) (*ResponseCommit, error)
+ InitChain(ctx context.Context, in *RequestInitChain, opts ...grpc.CallOption) (*ResponseInitChain, error)
+ ListSnapshots(ctx context.Context, in *RequestListSnapshots, opts ...grpc.CallOption) (*ResponseListSnapshots, error)
+ OfferSnapshot(ctx context.Context, in *RequestOfferSnapshot, opts ...grpc.CallOption) (*ResponseOfferSnapshot, error)
+ LoadSnapshotChunk(ctx context.Context, in *RequestLoadSnapshotChunk, opts ...grpc.CallOption) (*ResponseLoadSnapshotChunk, error)
+ ApplySnapshotChunk(ctx context.Context, in *RequestApplySnapshotChunk, opts ...grpc.CallOption) (*ResponseApplySnapshotChunk, error)
+ PrepareProposal(ctx context.Context, in *RequestPrepareProposal, opts ...grpc.CallOption) (*ResponsePrepareProposal, error)
+ ProcessProposal(ctx context.Context, in *RequestProcessProposal, opts ...grpc.CallOption) (*ResponseProcessProposal, error)
+ ExtendVote(ctx context.Context, in *RequestExtendVote, opts ...grpc.CallOption) (*ResponseExtendVote, error)
+ VerifyVoteExtension(ctx context.Context, in *RequestVerifyVoteExtension, opts ...grpc.CallOption) (*ResponseVerifyVoteExtension, error)
+ FinalizeBlock(ctx context.Context, in *RequestFinalizeBlock, opts ...grpc.CallOption) (*ResponseFinalizeBlock, error)
}
-func (m *ResponseEcho) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
+type aBCIApplicationClient struct {
+ cc *grpc.ClientConn
+}
+
+func NewABCIApplicationClient(cc *grpc.ClientConn) ABCIApplicationClient {
+ return &aBCIApplicationClient{cc}
+}
+
+func (c *aBCIApplicationClient) Echo(ctx context.Context, in *RequestEcho, opts ...grpc.CallOption) (*ResponseEcho, error) {
+ out := new(ResponseEcho)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Echo", in, out, opts...)
if err != nil {
return nil, err
}
- return dAtA[:n], nil
+ return out, nil
}
-func (m *ResponseEcho) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) Flush(ctx context.Context, in *RequestFlush, opts ...grpc.CallOption) (*ResponseFlush, error) {
+ out := new(ResponseFlush)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Flush", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
}
-func (m *ResponseEcho) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Message) > 0 {
- i -= len(m.Message)
- copy(dAtA[i:], m.Message)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Message)))
- i--
- dAtA[i] = 0xa
+func (c *aBCIApplicationClient) Info(ctx context.Context, in *RequestInfo, opts ...grpc.CallOption) (*ResponseInfo, error) {
+ out := new(ResponseInfo)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Info", in, out, opts...)
+ if err != nil {
+ return nil, err
}
- return len(dAtA) - i, nil
+ return out, nil
}
-func (m *ResponseFlush) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) CheckTx(ctx context.Context, in *RequestCheckTx, opts ...grpc.CallOption) (*ResponseCheckTx, error) {
+ out := new(ResponseCheckTx)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/CheckTx", in, out, opts...)
if err != nil {
return nil, err
}
- return dAtA[:n], nil
+ return out, nil
}
-func (m *ResponseFlush) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) Query(ctx context.Context, in *RequestQuery, opts ...grpc.CallOption) (*ResponseQuery, error) {
+ out := new(ResponseQuery)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Query", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
}
-func (m *ResponseFlush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- return len(dAtA) - i, nil
+func (c *aBCIApplicationClient) Commit(ctx context.Context, in *RequestCommit, opts ...grpc.CallOption) (*ResponseCommit, error) {
+ out := new(ResponseCommit)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/Commit", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
}
-func (m *ResponseInfo) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) InitChain(ctx context.Context, in *RequestInitChain, opts ...grpc.CallOption) (*ResponseInitChain, error) {
+ out := new(ResponseInitChain)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/InitChain", in, out, opts...)
if err != nil {
return nil, err
}
- return dAtA[:n], nil
+ return out, nil
}
-func (m *ResponseInfo) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) ListSnapshots(ctx context.Context, in *RequestListSnapshots, opts ...grpc.CallOption) (*ResponseListSnapshots, error) {
+ out := new(ResponseListSnapshots)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/ListSnapshots", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
}
-func (m *ResponseInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.LastBlockAppHash) > 0 {
- i -= len(m.LastBlockAppHash)
- copy(dAtA[i:], m.LastBlockAppHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.LastBlockAppHash)))
- i--
- dAtA[i] = 0x2a
- }
- if m.LastBlockHeight != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.LastBlockHeight))
- i--
- dAtA[i] = 0x20
- }
- if m.AppVersion != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.AppVersion))
- i--
- dAtA[i] = 0x18
+func (c *aBCIApplicationClient) OfferSnapshot(ctx context.Context, in *RequestOfferSnapshot, opts ...grpc.CallOption) (*ResponseOfferSnapshot, error) {
+ out := new(ResponseOfferSnapshot)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/OfferSnapshot", in, out, opts...)
+ if err != nil {
+ return nil, err
}
- if len(m.Version) > 0 {
- i -= len(m.Version)
- copy(dAtA[i:], m.Version)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Version)))
- i--
- dAtA[i] = 0x12
+ return out, nil
+}
+
+func (c *aBCIApplicationClient) LoadSnapshotChunk(ctx context.Context, in *RequestLoadSnapshotChunk, opts ...grpc.CallOption) (*ResponseLoadSnapshotChunk, error) {
+ out := new(ResponseLoadSnapshotChunk)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/LoadSnapshotChunk", in, out, opts...)
+ if err != nil {
+ return nil, err
}
- if len(m.Data) > 0 {
- i -= len(m.Data)
- copy(dAtA[i:], m.Data)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
- i--
- dAtA[i] = 0xa
+ return out, nil
+}
+
+func (c *aBCIApplicationClient) ApplySnapshotChunk(ctx context.Context, in *RequestApplySnapshotChunk, opts ...grpc.CallOption) (*ResponseApplySnapshotChunk, error) {
+ out := new(ResponseApplySnapshotChunk)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/ApplySnapshotChunk", in, out, opts...)
+ if err != nil {
+ return nil, err
}
- return len(dAtA) - i, nil
+ return out, nil
}
-func (m *ResponseInitChain) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) PrepareProposal(ctx context.Context, in *RequestPrepareProposal, opts ...grpc.CallOption) (*ResponsePrepareProposal, error) {
+ out := new(ResponsePrepareProposal)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/PrepareProposal", in, out, opts...)
if err != nil {
return nil, err
}
- return dAtA[:n], nil
+ return out, nil
}
-func (m *ResponseInitChain) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) ProcessProposal(ctx context.Context, in *RequestProcessProposal, opts ...grpc.CallOption) (*ResponseProcessProposal, error) {
+ out := new(ResponseProcessProposal)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/ProcessProposal", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
}
-func (m *ResponseInitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.InitialCoreHeight != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.InitialCoreHeight))
- i--
- dAtA[i] = 0x6
- i--
- dAtA[i] = 0xb0
- }
- if m.NextCoreChainLockUpdate != nil {
- {
- size, err := m.NextCoreChainLockUpdate.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x6
- i--
- dAtA[i] = 0xaa
- }
- {
- size, err := m.ValidatorSetUpdate.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x6
- i--
- dAtA[i] = 0xa2
- if len(m.AppHash) > 0 {
- i -= len(m.AppHash)
- copy(dAtA[i:], m.AppHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
- i--
- dAtA[i] = 0x1a
+func (c *aBCIApplicationClient) ExtendVote(ctx context.Context, in *RequestExtendVote, opts ...grpc.CallOption) (*ResponseExtendVote, error) {
+ out := new(ResponseExtendVote)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/ExtendVote", in, out, opts...)
+ if err != nil {
+ return nil, err
}
- if m.ConsensusParams != nil {
- {
- size, err := m.ConsensusParams.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
+ return out, nil
+}
+
+func (c *aBCIApplicationClient) VerifyVoteExtension(ctx context.Context, in *RequestVerifyVoteExtension, opts ...grpc.CallOption) (*ResponseVerifyVoteExtension, error) {
+ out := new(ResponseVerifyVoteExtension)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/VerifyVoteExtension", in, out, opts...)
+ if err != nil {
+ return nil, err
}
- return len(dAtA) - i, nil
+ return out, nil
}
-func (m *ResponseQuery) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
+func (c *aBCIApplicationClient) FinalizeBlock(ctx context.Context, in *RequestFinalizeBlock, opts ...grpc.CallOption) (*ResponseFinalizeBlock, error) {
+ out := new(ResponseFinalizeBlock)
+ err := c.cc.Invoke(ctx, "/tendermint.abci.ABCIApplication/FinalizeBlock", in, out, opts...)
if err != nil {
return nil, err
}
- return dAtA[:n], nil
+ return out, nil
}
-func (m *ResponseQuery) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+// ABCIApplicationServer is the server API for ABCIApplication service.
+type ABCIApplicationServer interface {
+ Echo(context.Context, *RequestEcho) (*ResponseEcho, error)
+ Flush(context.Context, *RequestFlush) (*ResponseFlush, error)
+ Info(context.Context, *RequestInfo) (*ResponseInfo, error)
+ CheckTx(context.Context, *RequestCheckTx) (*ResponseCheckTx, error)
+ Query(context.Context, *RequestQuery) (*ResponseQuery, error)
+ Commit(context.Context, *RequestCommit) (*ResponseCommit, error)
+ InitChain(context.Context, *RequestInitChain) (*ResponseInitChain, error)
+ ListSnapshots(context.Context, *RequestListSnapshots) (*ResponseListSnapshots, error)
+ OfferSnapshot(context.Context, *RequestOfferSnapshot) (*ResponseOfferSnapshot, error)
+ LoadSnapshotChunk(context.Context, *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error)
+ ApplySnapshotChunk(context.Context, *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error)
+ PrepareProposal(context.Context, *RequestPrepareProposal) (*ResponsePrepareProposal, error)
+ ProcessProposal(context.Context, *RequestProcessProposal) (*ResponseProcessProposal, error)
+ ExtendVote(context.Context, *RequestExtendVote) (*ResponseExtendVote, error)
+ VerifyVoteExtension(context.Context, *RequestVerifyVoteExtension) (*ResponseVerifyVoteExtension, error)
+ FinalizeBlock(context.Context, *RequestFinalizeBlock) (*ResponseFinalizeBlock, error)
}
-func (m *ResponseQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Codespace) > 0 {
- i -= len(m.Codespace)
- copy(dAtA[i:], m.Codespace)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
- i--
- dAtA[i] = 0x52
- }
- if m.Height != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Height))
- i--
- dAtA[i] = 0x48
+// UnimplementedABCIApplicationServer can be embedded to have forward compatible implementations.
+type UnimplementedABCIApplicationServer struct {
+}
+
+func (*UnimplementedABCIApplicationServer) Echo(ctx context.Context, req *RequestEcho) (*ResponseEcho, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Echo not implemented")
+}
+func (*UnimplementedABCIApplicationServer) Flush(ctx context.Context, req *RequestFlush) (*ResponseFlush, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Flush not implemented")
+}
+func (*UnimplementedABCIApplicationServer) Info(ctx context.Context, req *RequestInfo) (*ResponseInfo, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
+}
+func (*UnimplementedABCIApplicationServer) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method CheckTx not implemented")
+}
+func (*UnimplementedABCIApplicationServer) Query(ctx context.Context, req *RequestQuery) (*ResponseQuery, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Query not implemented")
+}
+func (*UnimplementedABCIApplicationServer) Commit(ctx context.Context, req *RequestCommit) (*ResponseCommit, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Commit not implemented")
+}
+func (*UnimplementedABCIApplicationServer) InitChain(ctx context.Context, req *RequestInitChain) (*ResponseInitChain, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method InitChain not implemented")
+}
+func (*UnimplementedABCIApplicationServer) ListSnapshots(ctx context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method ListSnapshots not implemented")
+}
+func (*UnimplementedABCIApplicationServer) OfferSnapshot(ctx context.Context, req *RequestOfferSnapshot) (*ResponseOfferSnapshot, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method OfferSnapshot not implemented")
+}
+func (*UnimplementedABCIApplicationServer) LoadSnapshotChunk(ctx context.Context, req *RequestLoadSnapshotChunk) (*ResponseLoadSnapshotChunk, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method LoadSnapshotChunk not implemented")
+}
+func (*UnimplementedABCIApplicationServer) ApplySnapshotChunk(ctx context.Context, req *RequestApplySnapshotChunk) (*ResponseApplySnapshotChunk, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method ApplySnapshotChunk not implemented")
+}
+func (*UnimplementedABCIApplicationServer) PrepareProposal(ctx context.Context, req *RequestPrepareProposal) (*ResponsePrepareProposal, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method PrepareProposal not implemented")
+}
+func (*UnimplementedABCIApplicationServer) ProcessProposal(ctx context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method ProcessProposal not implemented")
+}
+func (*UnimplementedABCIApplicationServer) ExtendVote(ctx context.Context, req *RequestExtendVote) (*ResponseExtendVote, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method ExtendVote not implemented")
+}
+func (*UnimplementedABCIApplicationServer) VerifyVoteExtension(ctx context.Context, req *RequestVerifyVoteExtension) (*ResponseVerifyVoteExtension, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method VerifyVoteExtension not implemented")
+}
+func (*UnimplementedABCIApplicationServer) FinalizeBlock(ctx context.Context, req *RequestFinalizeBlock) (*ResponseFinalizeBlock, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method FinalizeBlock not implemented")
+}
+
+func RegisterABCIApplicationServer(s *grpc.Server, srv ABCIApplicationServer) {
+ s.RegisterService(&_ABCIApplication_serviceDesc, srv)
+}
+
+func _ABCIApplication_Echo_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestEcho)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if m.ProofOps != nil {
- {
- size, err := m.ProofOps.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x42
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).Echo(ctx, in)
}
- if len(m.Value) > 0 {
- i -= len(m.Value)
- copy(dAtA[i:], m.Value)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
- i--
- dAtA[i] = 0x3a
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/Echo",
}
- if len(m.Key) > 0 {
- i -= len(m.Key)
- copy(dAtA[i:], m.Key)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Key)))
- i--
- dAtA[i] = 0x32
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).Echo(ctx, req.(*RequestEcho))
}
- if m.Index != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Index))
- i--
- dAtA[i] = 0x28
+ return interceptor(ctx, in, info, handler)
+}
+
+func _ABCIApplication_Flush_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestFlush)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if len(m.Info) > 0 {
- i -= len(m.Info)
- copy(dAtA[i:], m.Info)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
- i--
- dAtA[i] = 0x22
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).Flush(ctx, in)
}
- if len(m.Log) > 0 {
- i -= len(m.Log)
- copy(dAtA[i:], m.Log)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
- i--
- dAtA[i] = 0x1a
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/Flush",
}
- if m.Code != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Code))
- i--
- dAtA[i] = 0x8
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).Flush(ctx, req.(*RequestFlush))
}
- return len(dAtA) - i, nil
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseBeginBlock) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
+func _ABCIApplication_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestInfo)
+ if err := dec(in); err != nil {
return nil, err
}
- return dAtA[:n], nil
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).Info(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/Info",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).Info(ctx, req.(*RequestInfo))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseBeginBlock) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func _ABCIApplication_CheckTx_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestCheckTx)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).CheckTx(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/CheckTx",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).CheckTx(ctx, req.(*RequestCheckTx))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseBeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Events) > 0 {
- for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0xa
- }
+func _ABCIApplication_Query_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestQuery)
+ if err := dec(in); err != nil {
+ return nil, err
}
- return len(dAtA) - i, nil
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).Query(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/Query",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).Query(ctx, req.(*RequestQuery))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseCheckTx) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
+func _ABCIApplication_Commit_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestCommit)
+ if err := dec(in); err != nil {
return nil, err
}
- return dAtA[:n], nil
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).Commit(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/Commit",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).Commit(ctx, req.(*RequestCommit))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseCheckTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func _ABCIApplication_InitChain_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestInitChain)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).InitChain(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/InitChain",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).InitChain(ctx, req.(*RequestInitChain))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseCheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.MempoolError) > 0 {
- i -= len(m.MempoolError)
- copy(dAtA[i:], m.MempoolError)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.MempoolError)))
- i--
- dAtA[i] = 0x5a
+func _ABCIApplication_ListSnapshots_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestListSnapshots)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if m.Priority != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Priority))
- i--
- dAtA[i] = 0x50
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).ListSnapshots(ctx, in)
}
- if len(m.Sender) > 0 {
- i -= len(m.Sender)
- copy(dAtA[i:], m.Sender)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Sender)))
- i--
- dAtA[i] = 0x4a
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/ListSnapshots",
}
- if len(m.Codespace) > 0 {
- i -= len(m.Codespace)
- copy(dAtA[i:], m.Codespace)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
- i--
- dAtA[i] = 0x42
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).ListSnapshots(ctx, req.(*RequestListSnapshots))
}
- if len(m.Events) > 0 {
- for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x3a
- }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _ABCIApplication_OfferSnapshot_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestOfferSnapshot)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if m.GasUsed != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.GasUsed))
- i--
- dAtA[i] = 0x30
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).OfferSnapshot(ctx, in)
}
- if m.GasWanted != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.GasWanted))
- i--
- dAtA[i] = 0x28
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/OfferSnapshot",
}
- if len(m.Info) > 0 {
- i -= len(m.Info)
- copy(dAtA[i:], m.Info)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
- i--
- dAtA[i] = 0x22
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).OfferSnapshot(ctx, req.(*RequestOfferSnapshot))
}
- if len(m.Log) > 0 {
- i -= len(m.Log)
- copy(dAtA[i:], m.Log)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
- i--
- dAtA[i] = 0x1a
+ return interceptor(ctx, in, info, handler)
+}
+
+func _ABCIApplication_LoadSnapshotChunk_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestLoadSnapshotChunk)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if len(m.Data) > 0 {
- i -= len(m.Data)
- copy(dAtA[i:], m.Data)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
- i--
- dAtA[i] = 0x12
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).LoadSnapshotChunk(ctx, in)
}
- if m.Code != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Code))
- i--
- dAtA[i] = 0x8
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/LoadSnapshotChunk",
}
- return len(dAtA) - i, nil
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).LoadSnapshotChunk(ctx, req.(*RequestLoadSnapshotChunk))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseDeliverTx) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
+func _ABCIApplication_ApplySnapshotChunk_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestApplySnapshotChunk)
+ if err := dec(in); err != nil {
return nil, err
}
- return dAtA[:n], nil
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).ApplySnapshotChunk(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/ApplySnapshotChunk",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).ApplySnapshotChunk(ctx, req.(*RequestApplySnapshotChunk))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseDeliverTx) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
+func _ABCIApplication_PrepareProposal_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestPrepareProposal)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).PrepareProposal(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/PrepareProposal",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).PrepareProposal(ctx, req.(*RequestPrepareProposal))
+ }
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseDeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Codespace) > 0 {
- i -= len(m.Codespace)
- copy(dAtA[i:], m.Codespace)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
- i--
- dAtA[i] = 0x42
+func _ABCIApplication_ProcessProposal_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestProcessProposal)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if len(m.Events) > 0 {
- for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x3a
- }
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).ProcessProposal(ctx, in)
}
- if m.GasUsed != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.GasUsed))
- i--
- dAtA[i] = 0x30
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/ProcessProposal",
}
- if m.GasWanted != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.GasWanted))
- i--
- dAtA[i] = 0x28
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).ProcessProposal(ctx, req.(*RequestProcessProposal))
}
- if len(m.Info) > 0 {
- i -= len(m.Info)
- copy(dAtA[i:], m.Info)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
- i--
- dAtA[i] = 0x22
+ return interceptor(ctx, in, info, handler)
+}
+
+func _ABCIApplication_ExtendVote_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestExtendVote)
+ if err := dec(in); err != nil {
+ return nil, err
}
- if len(m.Log) > 0 {
- i -= len(m.Log)
- copy(dAtA[i:], m.Log)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
- i--
- dAtA[i] = 0x1a
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).ExtendVote(ctx, in)
}
- if len(m.Data) > 0 {
- i -= len(m.Data)
- copy(dAtA[i:], m.Data)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
- i--
- dAtA[i] = 0x12
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/ExtendVote",
}
- if m.Code != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Code))
- i--
- dAtA[i] = 0x8
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).ExtendVote(ctx, req.(*RequestExtendVote))
}
- return len(dAtA) - i, nil
+ return interceptor(ctx, in, info, handler)
}
-func (m *ResponseEndBlock) Marshal() (dAtA []byte, err error) {
+func _ABCIApplication_VerifyVoteExtension_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestVerifyVoteExtension)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).VerifyVoteExtension(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/VerifyVoteExtension",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).VerifyVoteExtension(ctx, req.(*RequestVerifyVoteExtension))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+func _ABCIApplication_FinalizeBlock_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(RequestFinalizeBlock)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(ABCIApplicationServer).FinalizeBlock(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/tendermint.abci.ABCIApplication/FinalizeBlock",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(ABCIApplicationServer).FinalizeBlock(ctx, req.(*RequestFinalizeBlock))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+var _ABCIApplication_serviceDesc = grpc.ServiceDesc{
+ ServiceName: "tendermint.abci.ABCIApplication",
+ HandlerType: (*ABCIApplicationServer)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "Echo",
+ Handler: _ABCIApplication_Echo_Handler,
+ },
+ {
+ MethodName: "Flush",
+ Handler: _ABCIApplication_Flush_Handler,
+ },
+ {
+ MethodName: "Info",
+ Handler: _ABCIApplication_Info_Handler,
+ },
+ {
+ MethodName: "CheckTx",
+ Handler: _ABCIApplication_CheckTx_Handler,
+ },
+ {
+ MethodName: "Query",
+ Handler: _ABCIApplication_Query_Handler,
+ },
+ {
+ MethodName: "Commit",
+ Handler: _ABCIApplication_Commit_Handler,
+ },
+ {
+ MethodName: "InitChain",
+ Handler: _ABCIApplication_InitChain_Handler,
+ },
+ {
+ MethodName: "ListSnapshots",
+ Handler: _ABCIApplication_ListSnapshots_Handler,
+ },
+ {
+ MethodName: "OfferSnapshot",
+ Handler: _ABCIApplication_OfferSnapshot_Handler,
+ },
+ {
+ MethodName: "LoadSnapshotChunk",
+ Handler: _ABCIApplication_LoadSnapshotChunk_Handler,
+ },
+ {
+ MethodName: "ApplySnapshotChunk",
+ Handler: _ABCIApplication_ApplySnapshotChunk_Handler,
+ },
+ {
+ MethodName: "PrepareProposal",
+ Handler: _ABCIApplication_PrepareProposal_Handler,
+ },
+ {
+ MethodName: "ProcessProposal",
+ Handler: _ABCIApplication_ProcessProposal_Handler,
+ },
+ {
+ MethodName: "ExtendVote",
+ Handler: _ABCIApplication_ExtendVote_Handler,
+ },
+ {
+ MethodName: "VerifyVoteExtension",
+ Handler: _ABCIApplication_VerifyVoteExtension_Handler,
+ },
+ {
+ MethodName: "FinalizeBlock",
+ Handler: _ABCIApplication_FinalizeBlock_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "tendermint/abci/types.proto",
+}
+
+func (m *Request) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -5673,33 +5331,38 @@ func (m *ResponseEndBlock) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *ResponseEndBlock) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseEndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if m.ValidatorSetUpdate != nil {
+ if m.Value != nil {
{
- size, err := m.ValidatorSetUpdate.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
+ size := m.Value.Size()
+ i -= size
+ if _, err := m.Value.MarshalTo(dAtA[i:]); err != nil {
return 0, err
}
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
}
- i--
- dAtA[i] = 0x6
- i--
- dAtA[i] = 0xaa
}
- if m.NextCoreChainLockUpdate != nil {
+ return len(dAtA) - i, nil
+}
+
+func (m *Request_Echo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_Echo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.Echo != nil {
{
- size, err := m.NextCoreChainLockUpdate.MarshalToSizedBuffer(dAtA[:i])
+ size, err := m.Echo.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -5707,27 +5370,20 @@ func (m *ResponseEndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
- dAtA[i] = 0x6
- i--
- dAtA[i] = 0xa2
- }
- if len(m.Events) > 0 {
- for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x1a
- }
+ dAtA[i] = 0xa
}
- if m.ConsensusParamUpdates != nil {
+ return len(dAtA) - i, nil
+}
+func (m *Request_Flush) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_Flush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.Flush != nil {
{
- size, err := m.ConsensusParamUpdates.MarshalToSizedBuffer(dAtA[:i])
+ size, err := m.Flush.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -5739,333 +5395,372 @@ func (m *ResponseEndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
}
return len(dAtA) - i, nil
}
-
-func (m *ResponseCommit) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *ResponseCommit) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_Info) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseCommit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_Info) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.RetainHeight != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.RetainHeight))
- i--
- dAtA[i] = 0x18
- }
- if len(m.Data) > 0 {
- i -= len(m.Data)
- copy(dAtA[i:], m.Data)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ if m.Info != nil {
+ {
+ size, err := m.Info.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
i--
- dAtA[i] = 0x12
+ dAtA[i] = 0x1a
}
return len(dAtA) - i, nil
}
-
-func (m *ResponseListSnapshots) Marshal() (dAtA []byte, err error) {
+func (m *Request_InitChain) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseListSnapshots) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_InitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.InitChain != nil {
+ {
+ size, err := m.InitChain.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ return len(dAtA) - i, nil
+}
+func (m *Request_Query) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_Query) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Snapshots) > 0 {
- for iNdEx := len(m.Snapshots) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Snapshots[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ if m.Query != nil {
+ {
+ size, err := m.Query.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
}
- i--
- dAtA[i] = 0xa
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
+ i--
+ dAtA[i] = 0x2a
}
return len(dAtA) - i, nil
}
-
-func (m *ResponseOfferSnapshot) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *ResponseOfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_BeginBlock) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseOfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_BeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Result != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Result))
+ if m.BeginBlock != nil {
+ {
+ size, err := m.BeginBlock.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0x32
}
return len(dAtA) - i, nil
}
-
-func (m *ResponseLoadSnapshotChunk) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *ResponseLoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_CheckTx) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseLoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_CheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Chunk) > 0 {
- i -= len(m.Chunk)
- copy(dAtA[i:], m.Chunk)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Chunk)))
+ if m.CheckTx != nil {
+ {
+ size, err := m.CheckTx.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x3a
}
return len(dAtA) - i, nil
}
-
-func (m *ResponseApplySnapshotChunk) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *ResponseApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_DeliverTx) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_DeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.RejectSenders) > 0 {
- for iNdEx := len(m.RejectSenders) - 1; iNdEx >= 0; iNdEx-- {
- i -= len(m.RejectSenders[iNdEx])
- copy(dAtA[i:], m.RejectSenders[iNdEx])
- i = encodeVarintTypes(dAtA, i, uint64(len(m.RejectSenders[iNdEx])))
- i--
- dAtA[i] = 0x1a
- }
- }
- if len(m.RefetchChunks) > 0 {
- dAtA44 := make([]byte, len(m.RefetchChunks)*10)
- var j43 int
- for _, num := range m.RefetchChunks {
- for num >= 1<<7 {
- dAtA44[j43] = uint8(uint64(num)&0x7f | 0x80)
- num >>= 7
- j43++
+ if m.DeliverTx != nil {
+ {
+ size, err := m.DeliverTx.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
}
- dAtA44[j43] = uint8(num)
- j43++
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- i -= j43
- copy(dAtA[i:], dAtA44[:j43])
- i = encodeVarintTypes(dAtA, i, uint64(j43))
- i--
- dAtA[i] = 0x12
- }
- if m.Result != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Result))
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0x42
}
return len(dAtA) - i, nil
}
-
-func (m *LastCommitInfo) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *LastCommitInfo) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_EndBlock) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *LastCommitInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_EndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.StateSignature) > 0 {
- i -= len(m.StateSignature)
- copy(dAtA[i:], m.StateSignature)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.StateSignature)))
- i--
- dAtA[i] = 0x2a
- }
- if len(m.BlockSignature) > 0 {
- i -= len(m.BlockSignature)
- copy(dAtA[i:], m.BlockSignature)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.BlockSignature)))
- i--
- dAtA[i] = 0x22
- }
- if len(m.QuorumHash) > 0 {
- i -= len(m.QuorumHash)
- copy(dAtA[i:], m.QuorumHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.QuorumHash)))
- i--
- dAtA[i] = 0x1a
- }
- if m.Round != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Round))
+ if m.EndBlock != nil {
+ {
+ size, err := m.EndBlock.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0x4a
}
return len(dAtA) - i, nil
}
-
-func (m *Event) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *Event) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_Commit) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Event) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_Commit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if len(m.Attributes) > 0 {
- for iNdEx := len(m.Attributes) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.Attributes[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ if m.Commit != nil {
+ {
+ size, err := m.Commit.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
}
- i--
- dAtA[i] = 0x12
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- }
- if len(m.Type) > 0 {
- i -= len(m.Type)
- copy(dAtA[i:], m.Type)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Type)))
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x52
}
return len(dAtA) - i, nil
}
-
-func (m *EventAttribute) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *EventAttribute) MarshalTo(dAtA []byte) (int, error) {
+func (m *Request_ListSnapshots) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *EventAttribute) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *Request_ListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Index {
- i--
- if m.Index {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
+ if m.ListSnapshots != nil {
+ {
+ size, err := m.ListSnapshots.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
- dAtA[i] = 0x18
+ dAtA[i] = 0x5a
}
- if len(m.Value) > 0 {
- i -= len(m.Value)
- copy(dAtA[i:], m.Value)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
+ return len(dAtA) - i, nil
+}
+func (m *Request_OfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_OfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.OfferSnapshot != nil {
+ {
+ size, err := m.OfferSnapshot.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
i--
- dAtA[i] = 0x12
+ dAtA[i] = 0x62
}
- if len(m.Key) > 0 {
- i -= len(m.Key)
- copy(dAtA[i:], m.Key)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Key)))
+ return len(dAtA) - i, nil
+}
+func (m *Request_LoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_LoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.LoadSnapshotChunk != nil {
+ {
+ size, err := m.LoadSnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
i--
- dAtA[i] = 0xa
+ dAtA[i] = 0x6a
}
return len(dAtA) - i, nil
}
+func (m *Request_ApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
-func (m *TxResult) Marshal() (dAtA []byte, err error) {
+func (m *Request_ApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.ApplySnapshotChunk != nil {
+ {
+ size, err := m.ApplySnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x72
+ }
+ return len(dAtA) - i, nil
+}
+func (m *Request_PrepareProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_PrepareProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.PrepareProposal != nil {
+ {
+ size, err := m.PrepareProposal.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x7a
+ }
+ return len(dAtA) - i, nil
+}
+func (m *Request_ProcessProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_ProcessProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.ProcessProposal != nil {
+ {
+ size, err := m.ProcessProposal.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x82
+ }
+ return len(dAtA) - i, nil
+}
+func (m *Request_ExtendVote) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_ExtendVote) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.ExtendVote != nil {
+ {
+ size, err := m.ExtendVote.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x8a
+ }
+ return len(dAtA) - i, nil
+}
+func (m *Request_VerifyVoteExtension) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_VerifyVoteExtension) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.VerifyVoteExtension != nil {
+ {
+ size, err := m.VerifyVoteExtension.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x92
+ }
+ return len(dAtA) - i, nil
+}
+func (m *Request_FinalizeBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Request_FinalizeBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.FinalizeBlock != nil {
+ {
+ size, err := m.FinalizeBlock.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x9a
+ }
+ return len(dAtA) - i, nil
+}
+func (m *RequestEcho) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6075,47 +5770,27 @@ func (m *TxResult) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *TxResult) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestEcho) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *TxResult) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestEcho) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- {
- size, err := m.Result.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x22
- if len(m.Tx) > 0 {
- i -= len(m.Tx)
- copy(dAtA[i:], m.Tx)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
- i--
- dAtA[i] = 0x1a
- }
- if m.Index != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Index))
- i--
- dAtA[i] = 0x10
- }
- if m.Height != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ if len(m.Message) > 0 {
+ i -= len(m.Message)
+ copy(dAtA[i:], m.Message)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Message)))
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Validator) Marshal() (dAtA []byte, err error) {
+func (m *RequestFlush) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6125,32 +5800,20 @@ func (m *Validator) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Validator) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestFlush) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Validator) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestFlush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if len(m.ProTxHash) > 0 {
- i -= len(m.ProTxHash)
- copy(dAtA[i:], m.ProTxHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.ProTxHash)))
- i--
- dAtA[i] = 0x22
- }
- if m.Power != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Power))
- i--
- dAtA[i] = 0x18
- }
return len(dAtA) - i, nil
}
-func (m *ValidatorUpdate) Marshal() (dAtA []byte, err error) {
+func (m *RequestInfo) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6160,51 +5823,44 @@ func (m *ValidatorUpdate) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *ValidatorUpdate) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestInfo) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ValidatorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if len(m.NodeAddress) > 0 {
- i -= len(m.NodeAddress)
- copy(dAtA[i:], m.NodeAddress)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.NodeAddress)))
+ if len(m.AbciVersion) > 0 {
+ i -= len(m.AbciVersion)
+ copy(dAtA[i:], m.AbciVersion)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AbciVersion)))
i--
dAtA[i] = 0x22
}
- if len(m.ProTxHash) > 0 {
- i -= len(m.ProTxHash)
- copy(dAtA[i:], m.ProTxHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.ProTxHash)))
+ if m.P2PVersion != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.P2PVersion))
i--
- dAtA[i] = 0x1a
+ dAtA[i] = 0x18
}
- if m.Power != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Power))
+ if m.BlockVersion != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.BlockVersion))
i--
dAtA[i] = 0x10
}
- if m.PubKey != nil {
- {
- size, err := m.PubKey.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
+ if len(m.Version) > 0 {
+ i -= len(m.Version)
+ copy(dAtA[i:], m.Version)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Version)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *ValidatorSetUpdate) Marshal() (dAtA []byte, err error) {
+func (m *RequestInitChain) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6214,51 +5870,76 @@ func (m *ValidatorSetUpdate) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *ValidatorSetUpdate) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestInitChain) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ValidatorSetUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestInitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if len(m.QuorumHash) > 0 {
- i -= len(m.QuorumHash)
- copy(dAtA[i:], m.QuorumHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.QuorumHash)))
+ if m.InitialCoreHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.InitialCoreHeight))
i--
- dAtA[i] = 0x1a
+ dAtA[i] = 0x38
}
- {
- size, err := m.ThresholdPublicKey.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
+ if m.InitialHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.InitialHeight))
+ i--
+ dAtA[i] = 0x30
+ }
+ if len(m.AppStateBytes) > 0 {
+ i -= len(m.AppStateBytes)
+ copy(dAtA[i:], m.AppStateBytes)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AppStateBytes)))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if m.ValidatorSet != nil {
+ {
+ size, err := m.ValidatorSet.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x22
}
- i--
- dAtA[i] = 0x12
- if len(m.ValidatorUpdates) > 0 {
- for iNdEx := len(m.ValidatorUpdates) - 1; iNdEx >= 0; iNdEx-- {
- {
- size, err := m.ValidatorUpdates[iNdEx].MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ if m.ConsensusParams != nil {
+ {
+ size, err := m.ConsensusParams.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
}
- i--
- dAtA[i] = 0xa
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
+ i--
+ dAtA[i] = 0x1a
+ }
+ if len(m.ChainId) > 0 {
+ i -= len(m.ChainId)
+ copy(dAtA[i:], m.ChainId)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ChainId)))
+ i--
+ dAtA[i] = 0x12
+ }
+ n22, err22 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
+ if err22 != nil {
+ return 0, err22
}
+ i -= n22
+ i = encodeVarintTypes(dAtA, i, uint64(n22))
+ i--
+ dAtA[i] = 0xa
return len(dAtA) - i, nil
}
-func (m *ThresholdPublicKeyUpdate) Marshal() (dAtA []byte, err error) {
+func (m *RequestQuery) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6268,30 +5949,49 @@ func (m *ThresholdPublicKeyUpdate) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *ThresholdPublicKeyUpdate) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestQuery) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ThresholdPublicKeyUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- {
- size, err := m.ThresholdPublicKey.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
+ if m.Prove {
+ i--
+ if m.Prove {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
}
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ i--
+ dAtA[i] = 0x20
+ }
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x18
+ }
+ if len(m.Path) > 0 {
+ i -= len(m.Path)
+ copy(dAtA[i:], m.Path)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Path)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Data) > 0 {
+ i -= len(m.Data)
+ copy(dAtA[i:], m.Data)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ i--
+ dAtA[i] = 0xa
}
- i--
- dAtA[i] = 0xa
return len(dAtA) - i, nil
}
-func (m *QuorumHashUpdate) Marshal() (dAtA []byte, err error) {
+func (m *RequestBeginBlock) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6301,27 +6001,61 @@ func (m *QuorumHashUpdate) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *QuorumHashUpdate) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestBeginBlock) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *QuorumHashUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestBeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if len(m.QuorumHash) > 0 {
- i -= len(m.QuorumHash)
- copy(dAtA[i:], m.QuorumHash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.QuorumHash)))
+ if len(m.ByzantineValidators) > 0 {
+ for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ }
+ {
+ size, err := m.LastCommitInfo.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ {
+ size, err := m.Header.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ if len(m.Hash) > 0 {
+ i -= len(m.Hash)
+ copy(dAtA[i:], m.Hash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *VoteInfo) Marshal() (dAtA []byte, err error) {
+func (m *RequestCheckTx) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6331,40 +6065,32 @@ func (m *VoteInfo) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *VoteInfo) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestCheckTx) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *VoteInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestCheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if m.SignedLastBlock {
- i--
- if m.SignedLastBlock {
- dAtA[i] = 1
- } else {
- dAtA[i] = 0
- }
+ if m.Type != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Type))
i--
dAtA[i] = 0x10
}
- {
- size, err := m.Validator.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
+ if len(m.Tx) > 0 {
+ i -= len(m.Tx)
+ copy(dAtA[i:], m.Tx)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
+ i--
+ dAtA[i] = 0xa
}
- i--
- dAtA[i] = 0xa
return len(dAtA) - i, nil
}
-func (m *Evidence) Marshal() (dAtA []byte, err error) {
+func (m *RequestDeliverTx) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6374,53 +6100,27 @@ func (m *Evidence) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Evidence) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestDeliverTx) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Evidence) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestDeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if m.TotalVotingPower != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.TotalVotingPower))
- i--
- dAtA[i] = 0x28
- }
- n50, err50 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
- if err50 != nil {
- return 0, err50
- }
- i -= n50
- i = encodeVarintTypes(dAtA, i, uint64(n50))
- i--
- dAtA[i] = 0x22
- if m.Height != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Height))
- i--
- dAtA[i] = 0x18
- }
- {
- size, err := m.Validator.MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintTypes(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
- if m.Type != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Type))
+ if len(m.Tx) > 0 {
+ i -= len(m.Tx)
+ copy(dAtA[i:], m.Tx)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
i--
- dAtA[i] = 0x8
+ dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
-func (m *Snapshot) Marshal() (dAtA []byte, err error) {
+func (m *RequestEndBlock) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -6430,47 +6130,16 @@ func (m *Snapshot) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
-func (m *Snapshot) MarshalTo(dAtA []byte) (int, error) {
+func (m *RequestEndBlock) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Snapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+func (m *RequestEndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
- if m.CoreChainLockedHeight != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.CoreChainLockedHeight))
- i--
- dAtA[i] = 0x6
- i--
- dAtA[i] = 0xa0
- }
- if len(m.Metadata) > 0 {
- i -= len(m.Metadata)
- copy(dAtA[i:], m.Metadata)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Metadata)))
- i--
- dAtA[i] = 0x2a
- }
- if len(m.Hash) > 0 {
- i -= len(m.Hash)
- copy(dAtA[i:], m.Hash)
- i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
- i--
- dAtA[i] = 0x22
- }
- if m.Chunks != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Chunks))
- i--
- dAtA[i] = 0x18
- }
- if m.Format != 0 {
- i = encodeVarintTypes(dAtA, i, uint64(m.Format))
- i--
- dAtA[i] = 0x10
- }
if m.Height != 0 {
i = encodeVarintTypes(dAtA, i, uint64(m.Height))
i--
@@ -6479,1212 +6148,7156 @@ func (m *Snapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
- offset -= sovTypes(v)
- base := offset
- for v >= 1<<7 {
- dAtA[offset] = uint8(v&0x7f | 0x80)
- v >>= 7
- offset++
+func (m *RequestCommit) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- dAtA[offset] = uint8(v)
- return base
+ return dAtA[:n], nil
}
-func (m *Request) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Value != nil {
- n += m.Value.Size()
- }
- return n
+
+func (m *RequestCommit) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Request_Echo) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *RequestCommit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Echo != nil {
- l = m.Echo.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+ return len(dAtA) - i, nil
}
-func (m *Request_Flush) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Flush != nil {
- l = m.Flush.Size()
- n += 1 + l + sovTypes(uint64(l))
+
+func (m *RequestListSnapshots) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *Request_Info) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Info != nil {
- l = m.Info.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+
+func (m *RequestListSnapshots) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Request_InitChain) Size() (n int) {
- if m == nil {
- return 0
- }
+
+func (m *RequestListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.InitChain != nil {
- l = m.InitChain.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+ return len(dAtA) - i, nil
}
-func (m *Request_Query) Size() (n int) {
- if m == nil {
- return 0
+
+func (m *RequestOfferSnapshot) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *RequestOfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RequestOfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Query != nil {
- l = m.Query.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.AppHash) > 0 {
+ i -= len(m.AppHash)
+ copy(dAtA[i:], m.AppHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
+ i--
+ dAtA[i] = 0x12
}
- return n
+ if m.Snapshot != nil {
+ {
+ size, err := m.Snapshot.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
}
-func (m *Request_BeginBlock) Size() (n int) {
- if m == nil {
- return 0
+
+func (m *RequestLoadSnapshotChunk) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *RequestLoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RequestLoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.BeginBlock != nil {
- l = m.BeginBlock.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if m.Chunk != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Chunk))
+ i--
+ dAtA[i] = 0x18
}
- return n
-}
-func (m *Request_CheckTx) Size() (n int) {
- if m == nil {
- return 0
+ if m.Format != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Format))
+ i--
+ dAtA[i] = 0x10
}
- var l int
- _ = l
- if m.CheckTx != nil {
- l = m.CheckTx.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x8
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Request_DeliverTx) Size() (n int) {
- if m == nil {
- return 0
+
+func (m *RequestApplySnapshotChunk) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *RequestApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RequestApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.DeliverTx != nil {
- l = m.DeliverTx.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Sender) > 0 {
+ i -= len(m.Sender)
+ copy(dAtA[i:], m.Sender)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Sender)))
+ i--
+ dAtA[i] = 0x1a
}
- return n
-}
-func (m *Request_EndBlock) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.Chunk) > 0 {
+ i -= len(m.Chunk)
+ copy(dAtA[i:], m.Chunk)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Chunk)))
+ i--
+ dAtA[i] = 0x12
}
- var l int
- _ = l
- if m.EndBlock != nil {
- l = m.EndBlock.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if m.Index != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Index))
+ i--
+ dAtA[i] = 0x8
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Request_Commit) Size() (n int) {
- if m == nil {
- return 0
+
+func (m *RequestPrepareProposal) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *RequestPrepareProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RequestPrepareProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Commit != nil {
- l = m.Commit.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.ProposerProTxHash) > 0 {
+ i -= len(m.ProposerProTxHash)
+ copy(dAtA[i:], m.ProposerProTxHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ProposerProTxHash)))
+ i--
+ dAtA[i] = 0x42
}
- return n
-}
-func (m *Request_ListSnapshots) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.ListSnapshots != nil {
- l = m.ListSnapshots.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.NextValidatorsHash) > 0 {
+ i -= len(m.NextValidatorsHash)
+ copy(dAtA[i:], m.NextValidatorsHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.NextValidatorsHash)))
+ i--
+ dAtA[i] = 0x3a
}
- return n
-}
-func (m *Request_OfferSnapshot) Size() (n int) {
- if m == nil {
- return 0
+ n26, err26 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
+ if err26 != nil {
+ return 0, err26
}
- var l int
- _ = l
- if m.OfferSnapshot != nil {
- l = m.OfferSnapshot.Size()
- n += 1 + l + sovTypes(uint64(l))
+ i -= n26
+ i = encodeVarintTypes(dAtA, i, uint64(n26))
+ i--
+ dAtA[i] = 0x32
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x28
}
- return n
-}
-func (m *Request_LoadSnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.ByzantineValidators) > 0 {
+ for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
}
- var l int
- _ = l
- if m.LoadSnapshotChunk != nil {
- l = m.LoadSnapshotChunk.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.LocalLastCommit.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- return n
-}
-func (m *Request_ApplySnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
+ i--
+ dAtA[i] = 0x1a
+ if len(m.Txs) > 0 {
+ for iNdEx := len(m.Txs) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.Txs[iNdEx])
+ copy(dAtA[i:], m.Txs[iNdEx])
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Txs[iNdEx])))
+ i--
+ dAtA[i] = 0x12
+ }
}
- var l int
- _ = l
- if m.ApplySnapshotChunk != nil {
- l = m.ApplySnapshotChunk.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if m.MaxTxBytes != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.MaxTxBytes))
+ i--
+ dAtA[i] = 0x8
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *RequestEcho) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Message)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+
+func (m *RequestProcessProposal) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *RequestFlush) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- return n
+func (m *RequestProcessProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *RequestInfo) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *RequestProcessProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- l = len(m.Version)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.BlockVersion != 0 {
- n += 1 + sovTypes(uint64(m.BlockVersion))
- }
- if m.P2PVersion != 0 {
- n += 1 + sovTypes(uint64(m.P2PVersion))
- }
- l = len(m.AbciVersion)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.ProposerProTxHash) > 0 {
+ i -= len(m.ProposerProTxHash)
+ copy(dAtA[i:], m.ProposerProTxHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ProposerProTxHash)))
+ i--
+ dAtA[i] = 0x42
}
- return n
-}
-
-func (m *RequestInitChain) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.NextValidatorsHash) > 0 {
+ i -= len(m.NextValidatorsHash)
+ copy(dAtA[i:], m.NextValidatorsHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.NextValidatorsHash)))
+ i--
+ dAtA[i] = 0x3a
}
- var l int
- _ = l
- l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
- n += 1 + l + sovTypes(uint64(l))
- l = len(m.ChainId)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ n28, err28 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
+ if err28 != nil {
+ return 0, err28
}
- if m.ConsensusParams != nil {
- l = m.ConsensusParams.Size()
- n += 1 + l + sovTypes(uint64(l))
+ i -= n28
+ i = encodeVarintTypes(dAtA, i, uint64(n28))
+ i--
+ dAtA[i] = 0x32
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x28
}
- if m.ValidatorSet != nil {
- l = m.ValidatorSet.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Hash) > 0 {
+ i -= len(m.Hash)
+ copy(dAtA[i:], m.Hash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
+ i--
+ dAtA[i] = 0x22
}
- l = len(m.AppStateBytes)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.ByzantineValidators) > 0 {
+ for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
}
- if m.InitialHeight != 0 {
- n += 1 + sovTypes(uint64(m.InitialHeight))
+ {
+ size, err := m.ProposedLastCommit.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- if m.InitialCoreHeight != 0 {
- n += 1 + sovTypes(uint64(m.InitialCoreHeight))
+ i--
+ dAtA[i] = 0x12
+ if len(m.Txs) > 0 {
+ for iNdEx := len(m.Txs) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.Txs[iNdEx])
+ copy(dAtA[i:], m.Txs[iNdEx])
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Txs[iNdEx])))
+ i--
+ dAtA[i] = 0xa
+ }
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *RequestQuery) Size() (n int) {
- if m == nil {
- return 0
+func (m *RequestExtendVote) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *RequestExtendVote) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RequestExtendVote) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- l = len(m.Data)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Path)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x10
}
- if m.Prove {
- n += 2
+ if len(m.Hash) > 0 {
+ i -= len(m.Hash)
+ copy(dAtA[i:], m.Hash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
+ i--
+ dAtA[i] = 0xa
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *RequestBeginBlock) Size() (n int) {
- if m == nil {
- return 0
+func (m *RequestVerifyVoteExtension) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *RequestVerifyVoteExtension) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RequestVerifyVoteExtension) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- l = len(m.Hash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.VoteExtension) > 0 {
+ i -= len(m.VoteExtension)
+ copy(dAtA[i:], m.VoteExtension)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.VoteExtension)))
+ i--
+ dAtA[i] = 0x22
}
- l = m.Header.Size()
- n += 1 + l + sovTypes(uint64(l))
- l = m.LastCommitInfo.Size()
- n += 1 + l + sovTypes(uint64(l))
- if len(m.ByzantineValidators) > 0 {
- for _, e := range m.ByzantineValidators {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- }
- return n
-}
-
-func (m *RequestCheckTx) Size() (n int) {
- if m == nil {
- return 0
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x18
}
- var l int
- _ = l
- l = len(m.Tx)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.ValidatorProTxHash) > 0 {
+ i -= len(m.ValidatorProTxHash)
+ copy(dAtA[i:], m.ValidatorProTxHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ValidatorProTxHash)))
+ i--
+ dAtA[i] = 0x12
}
- if m.Type != 0 {
- n += 1 + sovTypes(uint64(m.Type))
+ if len(m.Hash) > 0 {
+ i -= len(m.Hash)
+ copy(dAtA[i:], m.Hash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
+ i--
+ dAtA[i] = 0xa
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *RequestDeliverTx) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Tx)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+func (m *RequestFinalizeBlock) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *RequestEndBlock) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
- }
- return n
+func (m *RequestFinalizeBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *RequestCommit) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *RequestFinalizeBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- return n
-}
-
-func (m *RequestListSnapshots) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.ProposerProTxHash) > 0 {
+ i -= len(m.ProposerProTxHash)
+ copy(dAtA[i:], m.ProposerProTxHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ProposerProTxHash)))
+ i--
+ dAtA[i] = 0x42
}
- var l int
- _ = l
- return n
-}
-
-func (m *RequestOfferSnapshot) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.NextValidatorsHash) > 0 {
+ i -= len(m.NextValidatorsHash)
+ copy(dAtA[i:], m.NextValidatorsHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.NextValidatorsHash)))
+ i--
+ dAtA[i] = 0x3a
}
- var l int
- _ = l
- if m.Snapshot != nil {
- l = m.Snapshot.Size()
- n += 1 + l + sovTypes(uint64(l))
+ n30, err30 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
+ if err30 != nil {
+ return 0, err30
}
- l = len(m.AppHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ i -= n30
+ i = encodeVarintTypes(dAtA, i, uint64(n30))
+ i--
+ dAtA[i] = 0x32
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x28
}
- return n
-}
-
-func (m *RequestLoadSnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.Hash) > 0 {
+ i -= len(m.Hash)
+ copy(dAtA[i:], m.Hash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
+ i--
+ dAtA[i] = 0x22
}
- var l int
- _ = l
- if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
+ if len(m.ByzantineValidators) > 0 {
+ for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
}
- if m.Format != 0 {
- n += 1 + sovTypes(uint64(m.Format))
+ {
+ size, err := m.DecidedLastCommit.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- if m.Chunk != 0 {
- n += 1 + sovTypes(uint64(m.Chunk))
+ i--
+ dAtA[i] = 0x12
+ if len(m.Txs) > 0 {
+ for iNdEx := len(m.Txs) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.Txs[iNdEx])
+ copy(dAtA[i:], m.Txs[iNdEx])
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Txs[iNdEx])))
+ i--
+ dAtA[i] = 0xa
+ }
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *RequestApplySnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Index != 0 {
- n += 1 + sovTypes(uint64(m.Index))
- }
- l = len(m.Chunk)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Sender)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+func (m *Response) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *Response) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *Response) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
if m.Value != nil {
- n += m.Value.Size()
+ {
+ size := m.Value.Size()
+ i -= size
+ if _, err := m.Value.MarshalTo(dAtA[i:]); err != nil {
+ return 0, err
+ }
+ }
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_Exception) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_Exception) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_Exception) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.Exception != nil {
- l = m.Exception.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.Exception.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_Echo) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_Echo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_Echo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.Echo != nil {
- l = m.Echo.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.Echo.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_Flush) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Flush != nil {
- l = m.Flush.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+func (m *Response_Flush) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Response_Info) Size() (n int) {
- if m == nil {
- return 0
+
+func (m *Response_Flush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.Flush != nil {
+ {
+ size, err := m.Flush.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
}
- var l int
- _ = l
+ return len(dAtA) - i, nil
+}
+func (m *Response_Info) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_Info) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.Info != nil {
- l = m.Info.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.Info.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_InitChain) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_InitChain) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_InitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.InitChain != nil {
- l = m.InitChain.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.InitChain.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_Query) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_Query) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_Query) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.Query != nil {
- l = m.Query.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.Query.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x32
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_BeginBlock) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_BeginBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_BeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.BeginBlock != nil {
- l = m.BeginBlock.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.BeginBlock.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x3a
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_CheckTx) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_CheckTx) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_CheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.CheckTx != nil {
- l = m.CheckTx.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.CheckTx.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x42
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_DeliverTx) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_DeliverTx) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_DeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.DeliverTx != nil {
- l = m.DeliverTx.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.DeliverTx.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x4a
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_EndBlock) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_EndBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_EndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.EndBlock != nil {
- l = m.EndBlock.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.EndBlock.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x52
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_Commit) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_Commit) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_Commit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.Commit != nil {
- l = m.Commit.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.Commit.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x5a
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_ListSnapshots) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_ListSnapshots) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_ListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.ListSnapshots != nil {
- l = m.ListSnapshots.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.ListSnapshots.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x62
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_OfferSnapshot) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_OfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_OfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.OfferSnapshot != nil {
- l = m.OfferSnapshot.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.OfferSnapshot.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x6a
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_LoadSnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_LoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_LoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.LoadSnapshotChunk != nil {
- l = m.LoadSnapshotChunk.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.LoadSnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x72
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *Response_ApplySnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
+func (m *Response_ApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_ApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
if m.ApplySnapshotChunk != nil {
- l = m.ApplySnapshotChunk.Size()
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.ApplySnapshotChunk.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x7a
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseException) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Error)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+func (m *Response_PrepareProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseEcho) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Message)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+func (m *Response_PrepareProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.PrepareProposal != nil {
+ {
+ size, err := m.PrepareProposal.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x82
}
- return n
+ return len(dAtA) - i, nil
+}
+func (m *Response_ProcessProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseFlush) Size() (n int) {
- if m == nil {
- return 0
+func (m *Response_ProcessProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.ProcessProposal != nil {
+ {
+ size, err := m.ProcessProposal.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x8a
}
- var l int
- _ = l
- return n
+ return len(dAtA) - i, nil
+}
+func (m *Response_ExtendVote) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseInfo) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Data)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Version)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+func (m *Response_ExtendVote) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.ExtendVote != nil {
+ {
+ size, err := m.ExtendVote.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x92
}
- if m.AppVersion != 0 {
- n += 1 + sovTypes(uint64(m.AppVersion))
+ return len(dAtA) - i, nil
+}
+func (m *Response_VerifyVoteExtension) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_VerifyVoteExtension) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.VerifyVoteExtension != nil {
+ {
+ size, err := m.VerifyVoteExtension.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x9a
}
- if m.LastBlockHeight != 0 {
- n += 1 + sovTypes(uint64(m.LastBlockHeight))
+ return len(dAtA) - i, nil
+}
+func (m *Response_FinalizeBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Response_FinalizeBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ if m.FinalizeBlock != nil {
+ {
+ size, err := m.FinalizeBlock.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xa2
}
- l = len(m.LastBlockAppHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ return len(dAtA) - i, nil
+}
+func (m *ResponseException) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *ResponseInitChain) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *ResponseException) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseException) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.ConsensusParams != nil {
- l = m.ConsensusParams.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.AppHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = m.ValidatorSetUpdate.Size()
- n += 2 + l + sovTypes(uint64(l))
- if m.NextCoreChainLockUpdate != nil {
- l = m.NextCoreChainLockUpdate.Size()
- n += 2 + l + sovTypes(uint64(l))
- }
- if m.InitialCoreHeight != 0 {
- n += 2 + sovTypes(uint64(m.InitialCoreHeight))
+ if len(m.Error) > 0 {
+ i -= len(m.Error)
+ copy(dAtA[i:], m.Error)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Error)))
+ i--
+ dAtA[i] = 0xa
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseQuery) Size() (n int) {
- if m == nil {
- return 0
+func (m *ResponseEcho) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *ResponseEcho) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseEcho) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Code != 0 {
- n += 1 + sovTypes(uint64(m.Code))
- }
- l = len(m.Log)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Info)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.Index != 0 {
- n += 1 + sovTypes(uint64(m.Index))
- }
- l = len(m.Key)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Value)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.ProofOps != nil {
- l = m.ProofOps.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
- }
- l = len(m.Codespace)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Message) > 0 {
+ i -= len(m.Message)
+ copy(dAtA[i:], m.Message)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Message)))
+ i--
+ dAtA[i] = 0xa
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseBeginBlock) Size() (n int) {
- if m == nil {
- return 0
+func (m *ResponseFlush) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *ResponseFlush) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseFlush) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if len(m.Events) > 0 {
- for _, e := range m.Events {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- }
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseCheckTx) Size() (n int) {
- if m == nil {
- return 0
+func (m *ResponseInfo) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *ResponseInfo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Code != 0 {
- n += 1 + sovTypes(uint64(m.Code))
+ if len(m.LastBlockAppHash) > 0 {
+ i -= len(m.LastBlockAppHash)
+ copy(dAtA[i:], m.LastBlockAppHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.LastBlockAppHash)))
+ i--
+ dAtA[i] = 0x2a
}
- l = len(m.Data)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.LastBlockHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.LastBlockHeight))
+ i--
+ dAtA[i] = 0x20
}
- l = len(m.Log)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.AppVersion != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.AppVersion))
+ i--
+ dAtA[i] = 0x18
}
- l = len(m.Info)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Version) > 0 {
+ i -= len(m.Version)
+ copy(dAtA[i:], m.Version)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Version)))
+ i--
+ dAtA[i] = 0x12
}
- if m.GasWanted != 0 {
- n += 1 + sovTypes(uint64(m.GasWanted))
+ if len(m.Data) > 0 {
+ i -= len(m.Data)
+ copy(dAtA[i:], m.Data)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ i--
+ dAtA[i] = 0xa
}
- if m.GasUsed != 0 {
- n += 1 + sovTypes(uint64(m.GasUsed))
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseInitChain) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- if len(m.Events) > 0 {
- for _, e := range m.Events {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseInitChain) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseInitChain) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.InitialCoreHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.InitialCoreHeight))
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xb0
}
- l = len(m.Codespace)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.NextCoreChainLockUpdate != nil {
+ {
+ size, err := m.NextCoreChainLockUpdate.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xaa
}
- l = len(m.Sender)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ {
+ size, err := m.ValidatorSetUpdate.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
- if m.Priority != 0 {
- n += 1 + sovTypes(uint64(m.Priority))
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xa2
+ if len(m.AppHash) > 0 {
+ i -= len(m.AppHash)
+ copy(dAtA[i:], m.AppHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
+ i--
+ dAtA[i] = 0x1a
}
- l = len(m.MempoolError)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.ConsensusParams != nil {
+ {
+ size, err := m.ConsensusParams.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseDeliverTx) Size() (n int) {
- if m == nil {
- return 0
+func (m *ResponseQuery) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
+ return dAtA[:n], nil
+}
+
+func (m *ResponseQuery) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseQuery) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Code != 0 {
- n += 1 + sovTypes(uint64(m.Code))
+ if len(m.Codespace) > 0 {
+ i -= len(m.Codespace)
+ copy(dAtA[i:], m.Codespace)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
+ i--
+ dAtA[i] = 0x52
}
- l = len(m.Data)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Log)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Info)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.GasWanted != 0 {
- n += 1 + sovTypes(uint64(m.GasWanted))
- }
- if m.GasUsed != 0 {
- n += 1 + sovTypes(uint64(m.GasUsed))
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x48
}
- if len(m.Events) > 0 {
- for _, e := range m.Events {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if m.ProofOps != nil {
+ {
+ size, err := m.ProofOps.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
}
+ i--
+ dAtA[i] = 0x42
}
- l = len(m.Codespace)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Value) > 0 {
+ i -= len(m.Value)
+ copy(dAtA[i:], m.Value)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
+ i--
+ dAtA[i] = 0x3a
}
- return n
-}
-
-func (m *ResponseEndBlock) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.Key) > 0 {
+ i -= len(m.Key)
+ copy(dAtA[i:], m.Key)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Key)))
+ i--
+ dAtA[i] = 0x32
}
- var l int
- _ = l
- if m.ConsensusParamUpdates != nil {
- l = m.ConsensusParamUpdates.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if m.Index != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Index))
+ i--
+ dAtA[i] = 0x28
}
- if len(m.Events) > 0 {
- for _, e := range m.Events {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
+ if len(m.Info) > 0 {
+ i -= len(m.Info)
+ copy(dAtA[i:], m.Info)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
+ i--
+ dAtA[i] = 0x22
}
- if m.NextCoreChainLockUpdate != nil {
- l = m.NextCoreChainLockUpdate.Size()
- n += 2 + l + sovTypes(uint64(l))
+ if len(m.Log) > 0 {
+ i -= len(m.Log)
+ copy(dAtA[i:], m.Log)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
+ i--
+ dAtA[i] = 0x1a
}
- if m.ValidatorSetUpdate != nil {
- l = m.ValidatorSetUpdate.Size()
- n += 2 + l + sovTypes(uint64(l))
+ if m.Code != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Code))
+ i--
+ dAtA[i] = 0x8
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseCommit) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Data)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.RetainHeight != 0 {
- n += 1 + sovTypes(uint64(m.RetainHeight))
+func (m *ResponseBeginBlock) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *ResponseListSnapshots) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *ResponseBeginBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseBeginBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if len(m.Snapshots) > 0 {
- for _, e := range m.Snapshots {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Events) > 0 {
+ for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
}
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *ResponseOfferSnapshot) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Result != 0 {
- n += 1 + sovTypes(uint64(m.Result))
+func (m *ResponseCheckTx) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *ResponseLoadSnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Chunk)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+func (m *ResponseCheckTx) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ResponseApplySnapshotChunk) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *ResponseCheckTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Result != 0 {
- n += 1 + sovTypes(uint64(m.Result))
- }
- if len(m.RefetchChunks) > 0 {
- l = 0
- for _, e := range m.RefetchChunks {
- l += sovTypes(uint64(e))
- }
- n += 1 + sovTypes(uint64(l)) + l
+ if len(m.MempoolError) > 0 {
+ i -= len(m.MempoolError)
+ copy(dAtA[i:], m.MempoolError)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.MempoolError)))
+ i--
+ dAtA[i] = 0x5a
}
- if len(m.RejectSenders) > 0 {
- for _, s := range m.RejectSenders {
- l = len(s)
- n += 1 + l + sovTypes(uint64(l))
- }
+ if m.Priority != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Priority))
+ i--
+ dAtA[i] = 0x50
}
- return n
-}
-
-func (m *LastCommitInfo) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.Sender) > 0 {
+ i -= len(m.Sender)
+ copy(dAtA[i:], m.Sender)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Sender)))
+ i--
+ dAtA[i] = 0x4a
}
- var l int
- _ = l
- if m.Round != 0 {
- n += 1 + sovTypes(uint64(m.Round))
+ if len(m.Codespace) > 0 {
+ i -= len(m.Codespace)
+ copy(dAtA[i:], m.Codespace)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
+ i--
+ dAtA[i] = 0x42
}
- l = len(m.QuorumHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Events) > 0 {
+ for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x3a
+ }
}
- l = len(m.BlockSignature)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.GasUsed != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.GasUsed))
+ i--
+ dAtA[i] = 0x30
}
- l = len(m.StateSignature)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.GasWanted != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.GasWanted))
+ i--
+ dAtA[i] = 0x28
}
- return n
-}
-
-func (m *Event) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.Type)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if len(m.Attributes) > 0 {
- for _, e := range m.Attributes {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
- }
- return n
-}
-
-func (m *EventAttribute) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.Info) > 0 {
+ i -= len(m.Info)
+ copy(dAtA[i:], m.Info)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
+ i--
+ dAtA[i] = 0x22
}
- var l int
- _ = l
- l = len(m.Key)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Log) > 0 {
+ i -= len(m.Log)
+ copy(dAtA[i:], m.Log)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
+ i--
+ dAtA[i] = 0x1a
}
- l = len(m.Value)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Data) > 0 {
+ i -= len(m.Data)
+ copy(dAtA[i:], m.Data)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ i--
+ dAtA[i] = 0x12
}
- if m.Index {
- n += 2
+ if m.Code != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Code))
+ i--
+ dAtA[i] = 0x8
}
- return n
+ return len(dAtA) - i, nil
}
-func (m *TxResult) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
- }
- if m.Index != 0 {
- n += 1 + sovTypes(uint64(m.Index))
- }
- l = len(m.Tx)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+func (m *ResponseDeliverTx) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- l = m.Result.Size()
- n += 1 + l + sovTypes(uint64(l))
- return n
+ return dAtA[:n], nil
}
-func (m *Validator) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Power != 0 {
- n += 1 + sovTypes(uint64(m.Power))
- }
- l = len(m.ProTxHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- return n
+func (m *ResponseDeliverTx) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *ValidatorUpdate) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *ResponseDeliverTx) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.PubKey != nil {
- l = m.PubKey.Size()
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Codespace) > 0 {
+ i -= len(m.Codespace)
+ copy(dAtA[i:], m.Codespace)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
+ i--
+ dAtA[i] = 0x42
}
- if m.Power != 0 {
- n += 1 + sovTypes(uint64(m.Power))
+ if len(m.Events) > 0 {
+ for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x3a
+ }
}
- l = len(m.ProTxHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.GasUsed != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.GasUsed))
+ i--
+ dAtA[i] = 0x30
}
- l = len(m.NodeAddress)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if m.GasWanted != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.GasWanted))
+ i--
+ dAtA[i] = 0x28
}
- return n
-}
-
-func (m *ValidatorSetUpdate) Size() (n int) {
- if m == nil {
- return 0
+ if len(m.Info) > 0 {
+ i -= len(m.Info)
+ copy(dAtA[i:], m.Info)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
+ i--
+ dAtA[i] = 0x22
}
- var l int
- _ = l
- if len(m.ValidatorUpdates) > 0 {
- for _, e := range m.ValidatorUpdates {
- l = e.Size()
- n += 1 + l + sovTypes(uint64(l))
- }
+ if len(m.Log) > 0 {
+ i -= len(m.Log)
+ copy(dAtA[i:], m.Log)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
+ i--
+ dAtA[i] = 0x1a
}
- l = m.ThresholdPublicKey.Size()
- n += 1 + l + sovTypes(uint64(l))
- l = len(m.QuorumHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+ if len(m.Data) > 0 {
+ i -= len(m.Data)
+ copy(dAtA[i:], m.Data)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ i--
+ dAtA[i] = 0x12
}
- return n
-}
-
-func (m *ThresholdPublicKeyUpdate) Size() (n int) {
- if m == nil {
- return 0
+ if m.Code != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Code))
+ i--
+ dAtA[i] = 0x8
}
- var l int
- _ = l
- l = m.ThresholdPublicKey.Size()
- n += 1 + l + sovTypes(uint64(l))
- return n
+ return len(dAtA) - i, nil
}
-func (m *QuorumHashUpdate) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = len(m.QuorumHash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
+func (m *ResponseEndBlock) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
}
- return n
+ return dAtA[:n], nil
}
-func (m *VoteInfo) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- l = m.Validator.Size()
- n += 1 + l + sovTypes(uint64(l))
- if m.SignedLastBlock {
- n += 2
- }
- return n
+func (m *ResponseEndBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
-func (m *Evidence) Size() (n int) {
- if m == nil {
- return 0
- }
+func (m *ResponseEndBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
var l int
_ = l
- if m.Type != 0 {
- n += 1 + sovTypes(uint64(m.Type))
- }
- l = m.Validator.Size()
- n += 1 + l + sovTypes(uint64(l))
- if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
+ if m.ValidatorSetUpdate != nil {
+ {
+ size, err := m.ValidatorSetUpdate.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xaa
}
- l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
+ if m.NextCoreChainLockUpdate != nil {
+ {
+ size, err := m.NextCoreChainLockUpdate.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xa2
+ }
+ if len(m.Events) > 0 {
+ for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if m.ConsensusParamUpdates != nil {
+ {
+ size, err := m.ConsensusParamUpdates.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseCommit) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseCommit) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseCommit) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.RetainHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.RetainHeight))
+ i--
+ dAtA[i] = 0x18
+ }
+ if len(m.Data) > 0 {
+ i -= len(m.Data)
+ copy(dAtA[i:], m.Data)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ i--
+ dAtA[i] = 0x12
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseListSnapshots) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseListSnapshots) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseListSnapshots) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Snapshots) > 0 {
+ for iNdEx := len(m.Snapshots) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Snapshots[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseOfferSnapshot) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseOfferSnapshot) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseOfferSnapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.Result != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Result))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseLoadSnapshotChunk) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseLoadSnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseLoadSnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Chunk) > 0 {
+ i -= len(m.Chunk)
+ copy(dAtA[i:], m.Chunk)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Chunk)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseApplySnapshotChunk) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseApplySnapshotChunk) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseApplySnapshotChunk) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.RejectSenders) > 0 {
+ for iNdEx := len(m.RejectSenders) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.RejectSenders[iNdEx])
+ copy(dAtA[i:], m.RejectSenders[iNdEx])
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.RejectSenders[iNdEx])))
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if len(m.RefetchChunks) > 0 {
+ dAtA60 := make([]byte, len(m.RefetchChunks)*10)
+ var j59 int
+ for _, num := range m.RefetchChunks {
+ for num >= 1<<7 {
+ dAtA60[j59] = uint8(uint64(num)&0x7f | 0x80)
+ num >>= 7
+ j59++
+ }
+ dAtA60[j59] = uint8(num)
+ j59++
+ }
+ i -= j59
+ copy(dAtA[i:], dAtA60[:j59])
+ i = encodeVarintTypes(dAtA, i, uint64(j59))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Result != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Result))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponsePrepareProposal) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponsePrepareProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponsePrepareProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.ConsensusParamUpdates != nil {
+ {
+ size, err := m.ConsensusParamUpdates.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.ValidatorUpdates) > 0 {
+ for iNdEx := len(m.ValidatorUpdates) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ValidatorUpdates[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ }
+ if len(m.TxResults) > 0 {
+ for iNdEx := len(m.TxResults) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.TxResults[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if len(m.AppHash) > 0 {
+ i -= len(m.AppHash)
+ copy(dAtA[i:], m.AppHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.TxRecords) > 0 {
+ for iNdEx := len(m.TxRecords) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.TxRecords[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseProcessProposal) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseProcessProposal) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseProcessProposal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.ConsensusParamUpdates != nil {
+ {
+ size, err := m.ConsensusParamUpdates.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.ValidatorUpdates) > 0 {
+ for iNdEx := len(m.ValidatorUpdates) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ValidatorUpdates[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ }
+ if len(m.TxResults) > 0 {
+ for iNdEx := len(m.TxResults) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.TxResults[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if len(m.AppHash) > 0 {
+ i -= len(m.AppHash)
+ copy(dAtA[i:], m.AppHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Status != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Status))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseExtendVote) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseExtendVote) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseExtendVote) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.VoteExtension) > 0 {
+ i -= len(m.VoteExtension)
+ copy(dAtA[i:], m.VoteExtension)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.VoteExtension)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseVerifyVoteExtension) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseVerifyVoteExtension) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseVerifyVoteExtension) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.Status != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Status))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ResponseFinalizeBlock) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ResponseFinalizeBlock) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ResponseFinalizeBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.ValidatorSetUpdate != nil {
+ {
+ size, err := m.ValidatorSetUpdate.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xaa
+ }
+ if m.NextCoreChainLockUpdate != nil {
+ {
+ size, err := m.NextCoreChainLockUpdate.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xa2
+ }
+ if m.RetainHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.RetainHeight))
+ i--
+ dAtA[i] = 0x30
+ }
+ if len(m.AppHash) > 0 {
+ i -= len(m.AppHash)
+ copy(dAtA[i:], m.AppHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.AppHash)))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if m.ConsensusParamUpdates != nil {
+ {
+ size, err := m.ConsensusParamUpdates.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.TxResults) > 0 {
+ for iNdEx := len(m.TxResults) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.TxResults[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ if len(m.Events) > 0 {
+ for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *CommitInfo) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CommitInfo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CommitInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.StateSignature) > 0 {
+ i -= len(m.StateSignature)
+ copy(dAtA[i:], m.StateSignature)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.StateSignature)))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.BlockSignature) > 0 {
+ i -= len(m.BlockSignature)
+ copy(dAtA[i:], m.BlockSignature)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.BlockSignature)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.QuorumHash) > 0 {
+ i -= len(m.QuorumHash)
+ copy(dAtA[i:], m.QuorumHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.QuorumHash)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if m.Round != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Round))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ExtendedCommitInfo) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ExtendedCommitInfo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ExtendedCommitInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Votes) > 0 {
+ for iNdEx := len(m.Votes) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Votes[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ if m.Round != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Round))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Event) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Event) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Event) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Attributes) > 0 {
+ for iNdEx := len(m.Attributes) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Attributes[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ if len(m.Type) > 0 {
+ i -= len(m.Type)
+ copy(dAtA[i:], m.Type)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Type)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *EventAttribute) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *EventAttribute) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *EventAttribute) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.Index {
+ i--
+ if m.Index {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x18
+ }
+ if len(m.Value) > 0 {
+ i -= len(m.Value)
+ copy(dAtA[i:], m.Value)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Key) > 0 {
+ i -= len(m.Key)
+ copy(dAtA[i:], m.Key)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Key)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ExecTxResult) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ExecTxResult) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ExecTxResult) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Codespace) > 0 {
+ i -= len(m.Codespace)
+ copy(dAtA[i:], m.Codespace)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Codespace)))
+ i--
+ dAtA[i] = 0x42
+ }
+ if len(m.Events) > 0 {
+ for iNdEx := len(m.Events) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Events[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x3a
+ }
+ }
+ if m.GasUsed != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.GasUsed))
+ i--
+ dAtA[i] = 0x30
+ }
+ if m.GasWanted != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.GasWanted))
+ i--
+ dAtA[i] = 0x28
+ }
+ if len(m.Info) > 0 {
+ i -= len(m.Info)
+ copy(dAtA[i:], m.Info)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Info)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.Log) > 0 {
+ i -= len(m.Log)
+ copy(dAtA[i:], m.Log)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Log)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if len(m.Data) > 0 {
+ i -= len(m.Data)
+ copy(dAtA[i:], m.Data)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Data)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Code != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Code))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *TxResult) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *TxResult) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *TxResult) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ {
+ size, err := m.Result.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ if len(m.Tx) > 0 {
+ i -= len(m.Tx)
+ copy(dAtA[i:], m.Tx)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if m.Index != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Index))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *TxRecord) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *TxRecord) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *TxRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Tx) > 0 {
+ i -= len(m.Tx)
+ copy(dAtA[i:], m.Tx)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Tx)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Action != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Action))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Validator) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Validator) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Validator) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.ProTxHash) > 0 {
+ i -= len(m.ProTxHash)
+ copy(dAtA[i:], m.ProTxHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ProTxHash)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if m.Power != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Power))
+ i--
+ dAtA[i] = 0x18
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ValidatorUpdate) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ValidatorUpdate) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ValidatorUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.NodeAddress) > 0 {
+ i -= len(m.NodeAddress)
+ copy(dAtA[i:], m.NodeAddress)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.NodeAddress)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.ProTxHash) > 0 {
+ i -= len(m.ProTxHash)
+ copy(dAtA[i:], m.ProTxHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.ProTxHash)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if m.Power != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Power))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.PubKey != nil {
+ {
+ size, err := m.PubKey.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ValidatorSetUpdate) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ValidatorSetUpdate) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ValidatorSetUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.QuorumHash) > 0 {
+ i -= len(m.QuorumHash)
+ copy(dAtA[i:], m.QuorumHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.QuorumHash)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ {
+ size, err := m.ThresholdPublicKey.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ if len(m.ValidatorUpdates) > 0 {
+ for iNdEx := len(m.ValidatorUpdates) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.ValidatorUpdates[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *ThresholdPublicKeyUpdate) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ThresholdPublicKeyUpdate) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ThresholdPublicKeyUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ {
+ size, err := m.ThresholdPublicKey.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
+func (m *QuorumHashUpdate) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *QuorumHashUpdate) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *QuorumHashUpdate) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.QuorumHash) > 0 {
+ i -= len(m.QuorumHash)
+ copy(dAtA[i:], m.QuorumHash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.QuorumHash)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *VoteInfo) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *VoteInfo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *VoteInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.SignedLastBlock {
+ i--
+ if m.SignedLastBlock {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x10
+ }
+ {
+ size, err := m.Validator.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
+func (m *ExtendedVoteInfo) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *ExtendedVoteInfo) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *ExtendedVoteInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.VoteExtension) > 0 {
+ i -= len(m.VoteExtension)
+ copy(dAtA[i:], m.VoteExtension)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.VoteExtension)))
+ i--
+ dAtA[i] = 0x1a
+ }
+ if m.SignedLastBlock {
+ i--
+ if m.SignedLastBlock {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x10
+ }
+ {
+ size, err := m.Validator.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
+func (m *Misbehavior) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Misbehavior) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Misbehavior) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.TotalVotingPower != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.TotalVotingPower))
+ i--
+ dAtA[i] = 0x28
+ }
+ n72, err72 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
+ if err72 != nil {
+ return 0, err72
+ }
+ i -= n72
+ i = encodeVarintTypes(dAtA, i, uint64(n72))
+ i--
+ dAtA[i] = 0x22
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x18
+ }
+ {
+ size, err := m.Validator.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintTypes(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ if m.Type != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Type))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Snapshot) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Snapshot) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Snapshot) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.CoreChainLockedHeight != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.CoreChainLockedHeight))
+ i--
+ dAtA[i] = 0x6
+ i--
+ dAtA[i] = 0xa0
+ }
+ if len(m.Metadata) > 0 {
+ i -= len(m.Metadata)
+ copy(dAtA[i:], m.Metadata)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Metadata)))
+ i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.Hash) > 0 {
+ i -= len(m.Hash)
+ copy(dAtA[i:], m.Hash)
+ i = encodeVarintTypes(dAtA, i, uint64(len(m.Hash)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if m.Chunks != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Chunks))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.Format != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Format))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Height != 0 {
+ i = encodeVarintTypes(dAtA, i, uint64(m.Height))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
+ offset -= sovTypes(v)
+ base := offset
+ for v >= 1<<7 {
+ dAtA[offset] = uint8(v&0x7f | 0x80)
+ v >>= 7
+ offset++
+ }
+ dAtA[offset] = uint8(v)
+ return base
+}
+func (m *Request) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Value != nil {
+ n += m.Value.Size()
+ }
+ return n
+}
+
+func (m *Request_Echo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Echo != nil {
+ l = m.Echo.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_Flush) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Flush != nil {
+ l = m.Flush.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_Info) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Info != nil {
+ l = m.Info.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_InitChain) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.InitChain != nil {
+ l = m.InitChain.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_Query) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Query != nil {
+ l = m.Query.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_BeginBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.BeginBlock != nil {
+ l = m.BeginBlock.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_CheckTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.CheckTx != nil {
+ l = m.CheckTx.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_DeliverTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.DeliverTx != nil {
+ l = m.DeliverTx.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_EndBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.EndBlock != nil {
+ l = m.EndBlock.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_Commit) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Commit != nil {
+ l = m.Commit.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_ListSnapshots) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ListSnapshots != nil {
+ l = m.ListSnapshots.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_OfferSnapshot) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.OfferSnapshot != nil {
+ l = m.OfferSnapshot.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_LoadSnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.LoadSnapshotChunk != nil {
+ l = m.LoadSnapshotChunk.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_ApplySnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ApplySnapshotChunk != nil {
+ l = m.ApplySnapshotChunk.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_PrepareProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.PrepareProposal != nil {
+ l = m.PrepareProposal.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_ProcessProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ProcessProposal != nil {
+ l = m.ProcessProposal.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_ExtendVote) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ExtendVote != nil {
+ l = m.ExtendVote.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_VerifyVoteExtension) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.VerifyVoteExtension != nil {
+ l = m.VerifyVoteExtension.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Request_FinalizeBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.FinalizeBlock != nil {
+ l = m.FinalizeBlock.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *RequestEcho) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Message)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestFlush) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ return n
+}
+
+func (m *RequestInfo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Version)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.BlockVersion != 0 {
+ n += 1 + sovTypes(uint64(m.BlockVersion))
+ }
+ if m.P2PVersion != 0 {
+ n += 1 + sovTypes(uint64(m.P2PVersion))
+ }
+ l = len(m.AbciVersion)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestInitChain) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
+ n += 1 + l + sovTypes(uint64(l))
+ l = len(m.ChainId)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.ConsensusParams != nil {
+ l = m.ConsensusParams.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.ValidatorSet != nil {
+ l = m.ValidatorSet.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.AppStateBytes)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.InitialHeight != 0 {
+ n += 1 + sovTypes(uint64(m.InitialHeight))
+ }
+ if m.InitialCoreHeight != 0 {
+ n += 1 + sovTypes(uint64(m.InitialCoreHeight))
+ }
+ return n
+}
+
+func (m *RequestQuery) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Data)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Path)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ if m.Prove {
+ n += 2
+ }
+ return n
+}
+
+func (m *RequestBeginBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Hash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = m.Header.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ l = m.LastCommitInfo.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if len(m.ByzantineValidators) > 0 {
+ for _, e := range m.ByzantineValidators {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *RequestCheckTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Tx)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Type != 0 {
+ n += 1 + sovTypes(uint64(m.Type))
+ }
+ return n
+}
+
+func (m *RequestDeliverTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Tx)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestEndBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ return n
+}
+
+func (m *RequestCommit) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ return n
+}
+
+func (m *RequestListSnapshots) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ return n
+}
+
+func (m *RequestOfferSnapshot) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Snapshot != nil {
+ l = m.Snapshot.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.AppHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestLoadSnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ if m.Format != 0 {
+ n += 1 + sovTypes(uint64(m.Format))
+ }
+ if m.Chunk != 0 {
+ n += 1 + sovTypes(uint64(m.Chunk))
+ }
+ return n
+}
+
+func (m *RequestApplySnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Index != 0 {
+ n += 1 + sovTypes(uint64(m.Index))
+ }
+ l = len(m.Chunk)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Sender)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestPrepareProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.MaxTxBytes != 0 {
+ n += 1 + sovTypes(uint64(m.MaxTxBytes))
+ }
+ if len(m.Txs) > 0 {
+ for _, b := range m.Txs {
+ l = len(b)
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = m.LocalLastCommit.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if len(m.ByzantineValidators) > 0 {
+ for _, e := range m.ByzantineValidators {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
+ n += 1 + l + sovTypes(uint64(l))
+ l = len(m.NextValidatorsHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.ProposerProTxHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestProcessProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Txs) > 0 {
+ for _, b := range m.Txs {
+ l = len(b)
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = m.ProposedLastCommit.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if len(m.ByzantineValidators) > 0 {
+ for _, e := range m.ByzantineValidators {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = len(m.Hash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
+ n += 1 + l + sovTypes(uint64(l))
+ l = len(m.NextValidatorsHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.ProposerProTxHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestExtendVote) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Hash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ return n
+}
+
+func (m *RequestVerifyVoteExtension) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Hash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.ValidatorProTxHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ l = len(m.VoteExtension)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *RequestFinalizeBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Txs) > 0 {
+ for _, b := range m.Txs {
+ l = len(b)
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = m.DecidedLastCommit.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if len(m.ByzantineValidators) > 0 {
+ for _, e := range m.ByzantineValidators {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = len(m.Hash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
+ n += 1 + l + sovTypes(uint64(l))
+ l = len(m.NextValidatorsHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.ProposerProTxHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *Response) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Value != nil {
+ n += m.Value.Size()
+ }
+ return n
+}
+
+func (m *Response_Exception) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Exception != nil {
+ l = m.Exception.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_Echo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Echo != nil {
+ l = m.Echo.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_Flush) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Flush != nil {
+ l = m.Flush.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_Info) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Info != nil {
+ l = m.Info.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_InitChain) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.InitChain != nil {
+ l = m.InitChain.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_Query) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Query != nil {
+ l = m.Query.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_BeginBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.BeginBlock != nil {
+ l = m.BeginBlock.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_CheckTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.CheckTx != nil {
+ l = m.CheckTx.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_DeliverTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.DeliverTx != nil {
+ l = m.DeliverTx.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_EndBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.EndBlock != nil {
+ l = m.EndBlock.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_Commit) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Commit != nil {
+ l = m.Commit.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_ListSnapshots) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ListSnapshots != nil {
+ l = m.ListSnapshots.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_OfferSnapshot) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.OfferSnapshot != nil {
+ l = m.OfferSnapshot.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_LoadSnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.LoadSnapshotChunk != nil {
+ l = m.LoadSnapshotChunk.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_ApplySnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ApplySnapshotChunk != nil {
+ l = m.ApplySnapshotChunk.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_PrepareProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.PrepareProposal != nil {
+ l = m.PrepareProposal.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_ProcessProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ProcessProposal != nil {
+ l = m.ProcessProposal.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_ExtendVote) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ExtendVote != nil {
+ l = m.ExtendVote.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_VerifyVoteExtension) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.VerifyVoteExtension != nil {
+ l = m.VerifyVoteExtension.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *Response_FinalizeBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.FinalizeBlock != nil {
+ l = m.FinalizeBlock.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+func (m *ResponseException) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Error)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseEcho) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Message)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseFlush) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ return n
+}
+
+func (m *ResponseInfo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Data)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Version)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.AppVersion != 0 {
+ n += 1 + sovTypes(uint64(m.AppVersion))
+ }
+ if m.LastBlockHeight != 0 {
+ n += 1 + sovTypes(uint64(m.LastBlockHeight))
+ }
+ l = len(m.LastBlockAppHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseInitChain) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ConsensusParams != nil {
+ l = m.ConsensusParams.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.AppHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = m.ValidatorSetUpdate.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ if m.NextCoreChainLockUpdate != nil {
+ l = m.NextCoreChainLockUpdate.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ if m.InitialCoreHeight != 0 {
+ n += 2 + sovTypes(uint64(m.InitialCoreHeight))
+ }
+ return n
+}
+
+func (m *ResponseQuery) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Code != 0 {
+ n += 1 + sovTypes(uint64(m.Code))
+ }
+ l = len(m.Log)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Info)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Index != 0 {
+ n += 1 + sovTypes(uint64(m.Index))
+ }
+ l = len(m.Key)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Value)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.ProofOps != nil {
+ l = m.ProofOps.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ l = len(m.Codespace)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseBeginBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Events) > 0 {
+ for _, e := range m.Events {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *ResponseCheckTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Code != 0 {
+ n += 1 + sovTypes(uint64(m.Code))
+ }
+ l = len(m.Data)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Log)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Info)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.GasWanted != 0 {
+ n += 1 + sovTypes(uint64(m.GasWanted))
+ }
+ if m.GasUsed != 0 {
+ n += 1 + sovTypes(uint64(m.GasUsed))
+ }
+ if len(m.Events) > 0 {
+ for _, e := range m.Events {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = len(m.Codespace)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Sender)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Priority != 0 {
+ n += 1 + sovTypes(uint64(m.Priority))
+ }
+ l = len(m.MempoolError)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseDeliverTx) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Code != 0 {
+ n += 1 + sovTypes(uint64(m.Code))
+ }
+ l = len(m.Data)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Log)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Info)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.GasWanted != 0 {
+ n += 1 + sovTypes(uint64(m.GasWanted))
+ }
+ if m.GasUsed != 0 {
+ n += 1 + sovTypes(uint64(m.GasUsed))
+ }
+ if len(m.Events) > 0 {
+ for _, e := range m.Events {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = len(m.Codespace)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseEndBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.ConsensusParamUpdates != nil {
+ l = m.ConsensusParamUpdates.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if len(m.Events) > 0 {
+ for _, e := range m.Events {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if m.NextCoreChainLockUpdate != nil {
+ l = m.NextCoreChainLockUpdate.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ if m.ValidatorSetUpdate != nil {
+ l = m.ValidatorSetUpdate.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseCommit) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Data)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.RetainHeight != 0 {
+ n += 1 + sovTypes(uint64(m.RetainHeight))
+ }
+ return n
+}
+
+func (m *ResponseListSnapshots) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Snapshots) > 0 {
+ for _, e := range m.Snapshots {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *ResponseOfferSnapshot) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Result != 0 {
+ n += 1 + sovTypes(uint64(m.Result))
+ }
+ return n
+}
+
+func (m *ResponseLoadSnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Chunk)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseApplySnapshotChunk) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Result != 0 {
+ n += 1 + sovTypes(uint64(m.Result))
+ }
+ if len(m.RefetchChunks) > 0 {
+ l = 0
+ for _, e := range m.RefetchChunks {
+ l += sovTypes(uint64(e))
+ }
+ n += 1 + sovTypes(uint64(l)) + l
+ }
+ if len(m.RejectSenders) > 0 {
+ for _, s := range m.RejectSenders {
+ l = len(s)
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *ResponsePrepareProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.TxRecords) > 0 {
+ for _, e := range m.TxRecords {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = len(m.AppHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if len(m.TxResults) > 0 {
+ for _, e := range m.TxResults {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if len(m.ValidatorUpdates) > 0 {
+ for _, e := range m.ValidatorUpdates {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if m.ConsensusParamUpdates != nil {
+ l = m.ConsensusParamUpdates.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseProcessProposal) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Status != 0 {
+ n += 1 + sovTypes(uint64(m.Status))
+ }
+ l = len(m.AppHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if len(m.TxResults) > 0 {
+ for _, e := range m.TxResults {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if len(m.ValidatorUpdates) > 0 {
+ for _, e := range m.ValidatorUpdates {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if m.ConsensusParamUpdates != nil {
+ l = m.ConsensusParamUpdates.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseExtendVote) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.VoteExtension)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ResponseVerifyVoteExtension) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Status != 0 {
+ n += 1 + sovTypes(uint64(m.Status))
+ }
+ return n
+}
+
+func (m *ResponseFinalizeBlock) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Events) > 0 {
+ for _, e := range m.Events {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if len(m.TxResults) > 0 {
+ for _, e := range m.TxResults {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ if m.ConsensusParamUpdates != nil {
+ l = m.ConsensusParamUpdates.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.AppHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.RetainHeight != 0 {
+ n += 1 + sovTypes(uint64(m.RetainHeight))
+ }
+ if m.NextCoreChainLockUpdate != nil {
+ l = m.NextCoreChainLockUpdate.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ if m.ValidatorSetUpdate != nil {
+ l = m.ValidatorSetUpdate.Size()
+ n += 2 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *CommitInfo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Round != 0 {
+ n += 1 + sovTypes(uint64(m.Round))
+ }
+ l = len(m.QuorumHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.BlockSignature)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.StateSignature)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ExtendedCommitInfo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Round != 0 {
+ n += 1 + sovTypes(uint64(m.Round))
+ }
+ if len(m.Votes) > 0 {
+ for _, e := range m.Votes {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *Event) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Type)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if len(m.Attributes) > 0 {
+ for _, e := range m.Attributes {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *EventAttribute) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Key)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Value)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Index {
+ n += 2
+ }
+ return n
+}
+
+func (m *ExecTxResult) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Code != 0 {
+ n += 1 + sovTypes(uint64(m.Code))
+ }
+ l = len(m.Data)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Log)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Info)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.GasWanted != 0 {
+ n += 1 + sovTypes(uint64(m.GasWanted))
+ }
+ if m.GasUsed != 0 {
+ n += 1 + sovTypes(uint64(m.GasUsed))
+ }
+ if len(m.Events) > 0 {
+ for _, e := range m.Events {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = len(m.Codespace)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *TxResult) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ if m.Index != 0 {
+ n += 1 + sovTypes(uint64(m.Index))
+ }
+ l = len(m.Tx)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = m.Result.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ return n
+}
+
+func (m *TxRecord) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Action != 0 {
+ n += 1 + sovTypes(uint64(m.Action))
+ }
+ l = len(m.Tx)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *Validator) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Power != 0 {
+ n += 1 + sovTypes(uint64(m.Power))
+ }
+ l = len(m.ProTxHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ValidatorUpdate) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.PubKey != nil {
+ l = m.PubKey.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.Power != 0 {
+ n += 1 + sovTypes(uint64(m.Power))
+ }
+ l = len(m.ProTxHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.NodeAddress)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ValidatorSetUpdate) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.ValidatorUpdates) > 0 {
+ for _, e := range m.ValidatorUpdates {
+ l = e.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ }
+ l = m.ThresholdPublicKey.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ l = len(m.QuorumHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *ThresholdPublicKeyUpdate) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = m.ThresholdPublicKey.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ return n
+}
+
+func (m *QuorumHashUpdate) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.QuorumHash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *VoteInfo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = m.Validator.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if m.SignedLastBlock {
+ n += 2
+ }
+ return n
+}
+
+func (m *ExtendedVoteInfo) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = m.Validator.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if m.SignedLastBlock {
+ n += 2
+ }
+ l = len(m.VoteExtension)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ return n
+}
+
+func (m *Misbehavior) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Type != 0 {
+ n += 1 + sovTypes(uint64(m.Type))
+ }
+ l = m.Validator.Size()
+ n += 1 + l + sovTypes(uint64(l))
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
n += 1 + l + sovTypes(uint64(l))
if m.TotalVotingPower != 0 {
n += 1 + sovTypes(uint64(m.TotalVotingPower))
}
- return n
-}
+ return n
+}
+
+func (m *Snapshot) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Height != 0 {
+ n += 1 + sovTypes(uint64(m.Height))
+ }
+ if m.Format != 0 {
+ n += 1 + sovTypes(uint64(m.Format))
+ }
+ if m.Chunks != 0 {
+ n += 1 + sovTypes(uint64(m.Chunks))
+ }
+ l = len(m.Hash)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ l = len(m.Metadata)
+ if l > 0 {
+ n += 1 + l + sovTypes(uint64(l))
+ }
+ if m.CoreChainLockedHeight != 0 {
+ n += 2 + sovTypes(uint64(m.CoreChainLockedHeight))
+ }
+ return n
+}
+
+func sovTypes(x uint64) (n int) {
+ return (math_bits.Len64(x|1) + 6) / 7
+}
+func sozTypes(x uint64) (n int) {
+ return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
+}
+func (m *Request) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Request: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Request: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Echo", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestEcho{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_Echo{v}
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Flush", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestFlush{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_Flush{v}
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestInfo{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_Info{v}
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InitChain", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestInitChain{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_InitChain{v}
+ iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestQuery{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_Query{v}
+ iNdEx = postIndex
+ case 6:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field BeginBlock", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestBeginBlock{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_BeginBlock{v}
+ iNdEx = postIndex
+ case 7:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field CheckTx", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestCheckTx{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_CheckTx{v}
+ iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field DeliverTx", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestDeliverTx{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_DeliverTx{v}
+ iNdEx = postIndex
+ case 9:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field EndBlock", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestEndBlock{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_EndBlock{v}
+ iNdEx = postIndex
+ case 10:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Commit", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestCommit{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_Commit{v}
+ iNdEx = postIndex
+ case 11:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ListSnapshots", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestListSnapshots{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_ListSnapshots{v}
+ iNdEx = postIndex
+ case 12:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field OfferSnapshot", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestOfferSnapshot{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_OfferSnapshot{v}
+ iNdEx = postIndex
+ case 13:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field LoadSnapshotChunk", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestLoadSnapshotChunk{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_LoadSnapshotChunk{v}
+ iNdEx = postIndex
+ case 14:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ApplySnapshotChunk", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestApplySnapshotChunk{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_ApplySnapshotChunk{v}
+ iNdEx = postIndex
+ case 15:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field PrepareProposal", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestPrepareProposal{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_PrepareProposal{v}
+ iNdEx = postIndex
+ case 16:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ProcessProposal", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestProcessProposal{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_ProcessProposal{v}
+ iNdEx = postIndex
+ case 17:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ExtendVote", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestExtendVote{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_ExtendVote{v}
+ iNdEx = postIndex
+ case 18:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field VerifyVoteExtension", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestVerifyVoteExtension{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_VerifyVoteExtension{v}
+ iNdEx = postIndex
+ case 19:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field FinalizeBlock", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &RequestFinalizeBlock{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Request_FinalizeBlock{v}
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestEcho) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestEcho: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestEcho: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Message = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestFlush) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestFlush: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestFlush: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestInfo) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestInfo: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestInfo: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Version = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field BlockVersion", wireType)
+ }
+ m.BlockVersion = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.BlockVersion |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field P2PVersion", wireType)
+ }
+ m.P2PVersion = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.P2PVersion |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field AbciVersion", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.AbciVersion = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestInitChain: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestInitChain: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ChainId", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ChainId = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParams", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.ConsensusParams == nil {
+ m.ConsensusParams = &types1.ConsensusParams{}
+ }
+ if err := m.ConsensusParams.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ValidatorSet", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.ValidatorSet == nil {
+ m.ValidatorSet = &ValidatorSetUpdate{}
+ }
+ if err := m.ValidatorSet.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field AppStateBytes", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.AppStateBytes = append(m.AppStateBytes[:0], dAtA[iNdEx:postIndex]...)
+ if m.AppStateBytes == nil {
+ m.AppStateBytes = []byte{}
+ }
+ iNdEx = postIndex
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InitialHeight", wireType)
+ }
+ m.InitialHeight = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.InitialHeight |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 7:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InitialCoreHeight", wireType)
+ }
+ m.InitialCoreHeight = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.InitialCoreHeight |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestQuery) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestQuery: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestQuery: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
+ if m.Data == nil {
+ m.Data = []byte{}
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Path = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ }
+ m.Height = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Height |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Prove", wireType)
+ }
+ var v int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.Prove = bool(v != 0)
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestBeginBlock: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestBeginBlock: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Hash = append(m.Hash[:0], dAtA[iNdEx:postIndex]...)
+ if m.Hash == nil {
+ m.Hash = []byte{}
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Header", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Header.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field LastCommitInfo", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.LastCommitInfo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
+ if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestCheckTx) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestCheckTx: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestCheckTx: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Tx", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Tx = append(m.Tx[:0], dAtA[iNdEx:postIndex]...)
+ if m.Tx == nil {
+ m.Tx = []byte{}
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
+ }
+ m.Type = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Type |= CheckTxType(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestDeliverTx) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestDeliverTx: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestDeliverTx: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Tx", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Tx = append(m.Tx[:0], dAtA[iNdEx:postIndex]...)
+ if m.Tx == nil {
+ m.Tx = []byte{}
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestEndBlock) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestEndBlock: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestEndBlock: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ }
+ m.Height = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Height |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestCommit) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestCommit: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestCommit: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestListSnapshots) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestListSnapshots: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestListSnapshots: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestOfferSnapshot) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestOfferSnapshot: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestOfferSnapshot: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Snapshot", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Snapshot == nil {
+ m.Snapshot = &Snapshot{}
+ }
+ if err := m.Snapshot.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.AppHash == nil {
+ m.AppHash = []byte{}
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestLoadSnapshotChunk: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestLoadSnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ }
+ m.Height = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Height |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Format", wireType)
+ }
+ m.Format = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Format |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Chunk", wireType)
+ }
+ m.Chunk = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Chunk |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestApplySnapshotChunk: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestApplySnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
+ }
+ m.Index = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Index |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Chunk", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Chunk = append(m.Chunk[:0], dAtA[iNdEx:postIndex]...)
+ if m.Chunk == nil {
+ m.Chunk = []byte{}
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Sender", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Sender = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestPrepareProposal) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestPrepareProposal: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestPrepareProposal: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field MaxTxBytes", wireType)
+ }
+ m.MaxTxBytes = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.MaxTxBytes |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Txs", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Txs = append(m.Txs, make([]byte, postIndex-iNdEx))
+ copy(m.Txs[len(m.Txs)-1], dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field LocalLastCommit", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.LocalLastCommit.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
+ if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ }
+ m.Height = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Height |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 6:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 7:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NextValidatorsHash", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.NextValidatorsHash = append(m.NextValidatorsHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.NextValidatorsHash == nil {
+ m.NextValidatorsHash = []byte{}
+ }
+ iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ProposerProTxHash", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ProposerProTxHash = append(m.ProposerProTxHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.ProposerProTxHash == nil {
+ m.ProposerProTxHash = []byte{}
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
-func (m *Snapshot) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if m.Height != 0 {
- n += 1 + sovTypes(uint64(m.Height))
- }
- if m.Format != 0 {
- n += 1 + sovTypes(uint64(m.Format))
- }
- if m.Chunks != 0 {
- n += 1 + sovTypes(uint64(m.Chunks))
- }
- l = len(m.Hash)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- l = len(m.Metadata)
- if l > 0 {
- n += 1 + l + sovTypes(uint64(l))
- }
- if m.CoreChainLockedHeight != 0 {
- n += 2 + sovTypes(uint64(m.CoreChainLockedHeight))
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
}
- return n
-}
-
-func sovTypes(x uint64) (n int) {
- return (math_bits.Len64(x|1) + 6) / 7
-}
-func sozTypes(x uint64) (n int) {
- return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
+ return nil
}
-func (m *Request) Unmarshal(dAtA []byte) error {
+func (m *RequestProcessProposal) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -7707,17 +13320,17 @@ func (m *Request) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Request: wiretype end group for non-group")
+ return fmt.Errorf("proto: RequestProcessProposal: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Request: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: RequestProcessProposal: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Echo", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Txs", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -7727,30 +13340,27 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestEcho{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Request_Echo{v}
+ m.Txs = append(m.Txs, make([]byte, postIndex-iNdEx))
+ copy(m.Txs[len(m.Txs)-1], dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Flush", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ProposedLastCommit", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -7777,15 +13387,13 @@ func (m *Request) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestFlush{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if err := m.ProposedLastCommit.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Request_Flush{v}
iNdEx = postIndex
case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -7812,17 +13420,16 @@ func (m *Request) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestInfo{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
+ if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Request_Info{v}
iNdEx = postIndex
case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field InitChain", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -7832,32 +13439,31 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestInitChain{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.Hash = append(m.Hash[:0], dAtA[iNdEx:postIndex]...)
+ if m.Hash == nil {
+ m.Hash = []byte{}
}
- m.Value = &Request_InitChain{v}
iNdEx = postIndex
case 5:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
- var msglen int
+ m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -7867,30 +13473,14 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Height |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- v := &RequestQuery{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Request_Query{v}
- iNdEx = postIndex
case 6:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field BeginBlock", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -7917,17 +13507,15 @@ func (m *Request) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestBeginBlock{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Request_BeginBlock{v}
iNdEx = postIndex
case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field CheckTx", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field NextValidatorsHash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -7937,32 +13525,31 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestCheckTx{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.NextValidatorsHash = append(m.NextValidatorsHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.NextValidatorsHash == nil {
+ m.NextValidatorsHash = []byte{}
}
- m.Value = &Request_CheckTx{v}
iNdEx = postIndex
case 8:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field DeliverTx", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ProposerProTxHash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -7972,32 +13559,81 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestDeliverTx{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.ProposerProTxHash = append(m.ProposerProTxHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.ProposerProTxHash == nil {
+ m.ProposerProTxHash = []byte{}
}
- m.Value = &Request_DeliverTx{v}
iNdEx = postIndex
- case 9:
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestExtendVote) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestExtendVote: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestExtendVote: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field EndBlock", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8007,32 +13643,31 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestEndBlock{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.Hash = append(m.Hash[:0], dAtA[iNdEx:postIndex]...)
+ if m.Hash == nil {
+ m.Hash = []byte{}
}
- m.Value = &Request_EndBlock{v}
iNdEx = postIndex
- case 10:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Commit", wireType)
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
- var msglen int
+ m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8042,32 +13677,66 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Height |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
}
- postIndex := iNdEx + msglen
- if postIndex < 0 {
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
- if postIndex > l {
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RequestVerifyVoteExtension) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
return io.ErrUnexpectedEOF
}
- v := &RequestCommit{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- m.Value = &Request_Commit{v}
- iNdEx = postIndex
- case 11:
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RequestVerifyVoteExtension: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RequestVerifyVoteExtension: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ListSnapshots", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8077,32 +13746,31 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestListSnapshots{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.Hash = append(m.Hash[:0], dAtA[iNdEx:postIndex]...)
+ if m.Hash == nil {
+ m.Hash = []byte{}
}
- m.Value = &Request_ListSnapshots{v}
iNdEx = postIndex
- case 12:
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field OfferSnapshot", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ValidatorProTxHash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8112,32 +13780,31 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestOfferSnapshot{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.ValidatorProTxHash = append(m.ValidatorProTxHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.ValidatorProTxHash == nil {
+ m.ValidatorProTxHash = []byte{}
}
- m.Value = &Request_OfferSnapshot{v}
iNdEx = postIndex
- case 13:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field LoadSnapshotChunk", wireType)
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
- var msglen int
+ m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8147,32 +13814,16 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Height |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- v := &RequestLoadSnapshotChunk{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Request_LoadSnapshotChunk{v}
- iNdEx = postIndex
- case 14:
+ case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ApplySnapshotChunk", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field VoteExtension", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8182,26 +13833,25 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &RequestApplySnapshotChunk{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.VoteExtension = append(m.VoteExtension[:0], dAtA[iNdEx:postIndex]...)
+ if m.VoteExtension == nil {
+ m.VoteExtension = []byte{}
}
- m.Value = &Request_ApplySnapshotChunk{v}
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -8224,7 +13874,7 @@ func (m *Request) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *RequestEcho) Unmarshal(dAtA []byte) error {
+func (m *RequestFinalizeBlock) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -8247,17 +13897,17 @@ func (m *RequestEcho) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: RequestEcho: wiretype end group for non-group")
+ return fmt.Errorf("proto: RequestFinalizeBlock: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestEcho: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: RequestFinalizeBlock: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Txs", wireType)
}
- var stringLen uint64
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8267,129 +13917,96 @@ func (m *RequestEcho) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Message = string(dAtA[iNdEx:postIndex])
+ m.Txs = append(m.Txs, make([]byte, postIndex-iNdEx))
+ copy(m.Txs[len(m.Txs)-1], dAtA[iNdEx:postIndex])
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field DecidedLastCommit", wireType)
}
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestFlush) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestFlush: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestFlush: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
+ if err := m.DecidedLastCommit.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestInfo) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
+ if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestInfo: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestInfo: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ iNdEx = postIndex
+ case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
}
- var stringLen uint64
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8399,29 +14016,31 @@ func (m *RequestInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Version = string(dAtA[iNdEx:postIndex])
+ m.Hash = append(m.Hash[:0], dAtA[iNdEx:postIndex]...)
+ if m.Hash == nil {
+ m.Hash = []byte{}
+ }
iNdEx = postIndex
- case 2:
+ case 5:
if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field BlockVersion", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
- m.BlockVersion = 0
+ m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8431,16 +14050,16 @@ func (m *RequestInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.BlockVersion |= uint64(b&0x7F) << shift
+ m.Height |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field P2PVersion", wireType)
+ case 6:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
}
- m.P2PVersion = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8450,16 +14069,30 @@ func (m *RequestInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.P2PVersion |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 4:
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field AbciVersion", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field NextValidatorsHash", wireType)
}
- var stringLen uint64
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8469,23 +14102,59 @@ func (m *RequestInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.AbciVersion = string(dAtA[iNdEx:postIndex])
+ m.NextValidatorsHash = append(m.NextValidatorsHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.NextValidatorsHash == nil {
+ m.NextValidatorsHash = []byte{}
+ }
+ iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ProposerProTxHash", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ProposerProTxHash = append(m.ProposerProTxHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.ProposerProTxHash == nil {
+ m.ProposerProTxHash = []byte{}
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -8508,7 +14177,7 @@ func (m *RequestInfo) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
+func (m *Response) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -8531,15 +14200,15 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: RequestInitChain: wiretype end group for non-group")
+ return fmt.Errorf("proto: Response: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestInitChain: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Response: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Exception", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -8566,15 +14235,17 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
+ v := &ResponseException{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
+ m.Value = &Response_Exception{v}
iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ChainId", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Echo", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8584,27 +14255,30 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.ChainId = string(dAtA[iNdEx:postIndex])
+ v := &ResponseEcho{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Response_Echo{v}
iNdEx = postIndex
case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParams", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Flush", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -8631,16 +14305,15 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.ConsensusParams == nil {
- m.ConsensusParams = &types1.ConsensusParams{}
- }
- if err := m.ConsensusParams.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ v := &ResponseFlush{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
+ m.Value = &Response_Flush{v}
iNdEx = postIndex
case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ValidatorSet", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -8667,18 +14340,17 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.ValidatorSet == nil {
- m.ValidatorSet = &ValidatorSetUpdate{}
- }
- if err := m.ValidatorSet.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ v := &ResponseInfo{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
+ m.Value = &Response_Info{v}
iNdEx = postIndex
case 5:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field AppStateBytes", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field InitChain", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8688,31 +14360,32 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.AppStateBytes = append(m.AppStateBytes[:0], dAtA[iNdEx:postIndex]...)
- if m.AppStateBytes == nil {
- m.AppStateBytes = []byte{}
+ v := &ResponseInitChain{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
+ m.Value = &Response_InitChain{v}
iNdEx = postIndex
case 6:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field InitialHeight", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
}
- m.InitialHeight = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8722,16 +14395,32 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.InitialHeight |= int64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &ResponseQuery{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Response_Query{v}
+ iNdEx = postIndex
case 7:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field InitialCoreHeight", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field BeginBlock", wireType)
}
- m.InitialCoreHeight = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8741,66 +14430,67 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.InitialCoreHeight |= uint32(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
}
- if (skippy < 0) || (iNdEx+skippy) < 0 {
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
return ErrInvalidLengthTypes
}
- if (iNdEx + skippy) > l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestQuery) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ v := &ResponseBeginBlock{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
+ m.Value = &Response_BeginBlock{v}
+ iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field CheckTx", wireType)
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestQuery: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestQuery: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ v := &ResponseCheckTx{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Response_CheckTx{v}
+ iNdEx = postIndex
+ case 9:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field DeliverTx", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8810,31 +14500,32 @@ func (m *RequestQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
- if m.Data == nil {
- m.Data = []byte{}
+ v := &ResponseDeliverTx{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
+ m.Value = &Response_DeliverTx{v}
iNdEx = postIndex
- case 2:
+ case 10:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field EndBlock", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8844,29 +14535,32 @@ func (m *RequestQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Path = string(dAtA[iNdEx:postIndex])
+ v := &ResponseEndBlock{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Response_EndBlock{v}
iNdEx = postIndex
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ case 11:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Commit", wireType)
}
- m.Height = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8876,16 +14570,32 @@ func (m *RequestQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Height |= int64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 4:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Prove", wireType)
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
}
- var v int
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ v := &ResponseCommit{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ m.Value = &Response_Commit{v}
+ iNdEx = postIndex
+ case 12:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ListSnapshots", wireType)
+ }
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8895,67 +14605,32 @@ func (m *RequestQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- v |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- m.Prove = bool(v != 0)
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ v := &ResponseListSnapshots{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestBeginBlock: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestBeginBlock: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ m.Value = &Response_ListSnapshots{v}
+ iNdEx = postIndex
+ case 13:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Hash", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field OfferSnapshot", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -8965,29 +14640,30 @@ func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Hash = append(m.Hash[:0], dAtA[iNdEx:postIndex]...)
- if m.Hash == nil {
- m.Hash = []byte{}
+ v := &ResponseOfferSnapshot{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
+ m.Value = &Response_OfferSnapshot{v}
iNdEx = postIndex
- case 2:
+ case 14:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Header", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field LoadSnapshotChunk", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -9014,13 +14690,15 @@ func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if err := m.Header.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ v := &ResponseLoadSnapshotChunk{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
+ m.Value = &Response_LoadSnapshotChunk{v}
iNdEx = postIndex
- case 3:
+ case 15:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field LastCommitInfo", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ApplySnapshotChunk", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -9047,13 +14725,15 @@ func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if err := m.LastCommitInfo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ v := &ResponseApplySnapshotChunk{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
+ m.Value = &Response_ApplySnapshotChunk{v}
iNdEx = postIndex
- case 4:
+ case 16:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field PrepareProposal", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -9080,66 +14760,52 @@ func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.ByzantineValidators = append(m.ByzantineValidators, Evidence{})
- if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ v := &ResponsePrepareProposal{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
+ m.Value = &Response_PrepareProposal{v}
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
+ case 17:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ProcessProposal", wireType)
}
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestCheckTx) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ v := &ResponseProcessProposal{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestCheckTx: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestCheckTx: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ m.Value = &Response_ProcessProposal{v}
+ iNdEx = postIndex
+ case 18:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Tx", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ExtendVote", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9149,31 +14815,32 @@ func (m *RequestCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Tx = append(m.Tx[:0], dAtA[iNdEx:postIndex]...)
- if m.Tx == nil {
- m.Tx = []byte{}
+ v := &ResponseExtendVote{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
+ m.Value = &Response_ExtendVote{v}
iNdEx = postIndex
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
+ case 19:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field VerifyVoteExtension", wireType)
}
- m.Type = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9183,66 +14850,32 @@ func (m *RequestCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Type |= CheckTxType(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestDeliverTx) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ v := &ResponseVerifyVoteExtension{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestDeliverTx: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestDeliverTx: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ m.Value = &Response_VerifyVoteExtension{v}
+ iNdEx = postIndex
+ case 20:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Tx", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field FinalizeBlock", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9252,25 +14885,26 @@ func (m *RequestDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Tx = append(m.Tx[:0], dAtA[iNdEx:postIndex]...)
- if m.Tx == nil {
- m.Tx = []byte{}
+ v := &ResponseFinalizeBlock{}
+ if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
+ m.Value = &Response_FinalizeBlock{v}
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -9293,7 +14927,7 @@ func (m *RequestDeliverTx) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *RequestEndBlock) Unmarshal(dAtA []byte) error {
+func (m *ResponseException) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -9316,17 +14950,17 @@ func (m *RequestEndBlock) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: RequestEndBlock: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseException: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestEndBlock: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseException: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType)
}
- m.Height = 0
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9336,111 +14970,24 @@ func (m *RequestEndBlock) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Height |= int64(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestCommit) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestCommit: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestCommit: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
return ErrInvalidLengthTypes
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *RequestListSnapshots) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestListSnapshots: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestListSnapshots: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
+ m.Error = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -9462,76 +15009,40 @@ func (m *RequestListSnapshots) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *RequestOfferSnapshot) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: RequestOfferSnapshot: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestOfferSnapshot: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Snapshot", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
+func (m *ResponseEcho) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
}
- if postIndex > l {
+ if iNdEx >= l {
return io.ErrUnexpectedEOF
}
- if m.Snapshot == nil {
- m.Snapshot = &Snapshot{}
- }
- if err := m.Snapshot.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- iNdEx = postIndex
- case 2:
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseEcho: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseEcho: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9541,25 +15052,23 @@ func (m *RequestOfferSnapshot) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
- if m.AppHash == nil {
- m.AppHash = []byte{}
- }
+ m.Message = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -9582,7 +15091,7 @@ func (m *RequestOfferSnapshot) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *RequestLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
+func (m *ResponseFlush) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -9605,69 +15114,12 @@ func (m *RequestLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: RequestLoadSnapshotChunk: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseFlush: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestLoadSnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseFlush: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
- }
- m.Height = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Height |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Format", wireType)
- }
- m.Format = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Format |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Chunk", wireType)
- }
- m.Chunk = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Chunk |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -9689,7 +15141,7 @@ func (m *RequestLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
+func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -9712,17 +15164,17 @@ func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: RequestApplySnapshotChunk: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseInfo: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: RequestApplySnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseInfo: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
}
- m.Index = 0
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9732,16 +15184,29 @@ func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Index |= uint32(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Data = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Chunk", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9751,31 +15216,67 @@ func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Chunk = append(m.Chunk[:0], dAtA[iNdEx:postIndex]...)
- if m.Chunk == nil {
- m.Chunk = []byte{}
- }
+ m.Version = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field AppVersion", wireType)
+ }
+ m.AppVersion = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.AppVersion |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field LastBlockHeight", wireType)
+ }
+ m.LastBlockHeight = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.LastBlockHeight |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 5:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Sender", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field LastBlockAppHash", wireType)
}
- var stringLen uint64
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9785,23 +15286,25 @@ func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Sender = string(dAtA[iNdEx:postIndex])
+ m.LastBlockAppHash = append(m.LastBlockAppHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.LastBlockAppHash == nil {
+ m.LastBlockAppHash = []byte{}
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -9824,7 +15327,7 @@ func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *Response) Unmarshal(dAtA []byte) error {
+func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -9837,60 +15340,25 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: Response: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: Response: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Exception", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- v := &ResponseException{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- m.Value = &Response_Exception{v}
- iNdEx = postIndex
- case 2:
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseInitChain: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseInitChain: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Echo", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParams", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -9917,17 +15385,18 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseEcho{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if m.ConsensusParams == nil {
+ m.ConsensusParams = &types1.ConsensusParams{}
+ }
+ if err := m.ConsensusParams.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Response_Echo{v}
iNdEx = postIndex
case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Flush", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -9937,30 +15406,29 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseFlush{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.AppHash == nil {
+ m.AppHash = []byte{}
}
- m.Value = &Response_Flush{v}
iNdEx = postIndex
- case 4:
+ case 100:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ValidatorSetUpdate", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -9987,15 +15455,13 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseInfo{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if err := m.ValidatorSetUpdate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Response_Info{v}
iNdEx = postIndex
- case 5:
+ case 101:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field InitChain", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field NextCoreChainLockUpdate", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -10022,17 +15488,18 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseInitChain{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if m.NextCoreChainLockUpdate == nil {
+ m.NextCoreChainLockUpdate = &types1.CoreChainLock{}
+ }
+ if err := m.NextCoreChainLockUpdate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Response_InitChain{v}
iNdEx = postIndex
- case 6:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Query", wireType)
+ case 102:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InitialCoreHeight", wireType)
}
- var msglen int
+ m.InitialCoreHeight = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10042,32 +15509,66 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.InitialCoreHeight |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
}
- postIndex := iNdEx + msglen
- if postIndex < 0 {
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
- if postIndex > l {
+ if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseQuery{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
}
- m.Value = &Response_Query{v}
- iNdEx = postIndex
- case 7:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field BeginBlock", wireType)
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
}
- var msglen int
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseQuery: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseQuery: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+ }
+ m.Code = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10077,32 +15578,16 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Code |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- v := &ResponseBeginBlock{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Response_BeginBlock{v}
- iNdEx = postIndex
- case 8:
+ case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field CheckTx", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10112,32 +15597,29 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseCheckTx{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Response_CheckTx{v}
+ m.Log = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 9:
+ case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field DeliverTx", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10147,32 +15629,29 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseDeliverTx{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Response_DeliverTx{v}
+ m.Info = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 10:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field EndBlock", wireType)
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
}
- var msglen int
+ m.Index = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10182,32 +15661,16 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Index |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- v := &ResponseEndBlock{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Response_EndBlock{v}
- iNdEx = postIndex
- case 11:
+ case 6:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Commit", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10217,32 +15680,31 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseCommit{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.Key = append(m.Key[:0], dAtA[iNdEx:postIndex]...)
+ if m.Key == nil {
+ m.Key = []byte{}
}
- m.Value = &Response_Commit{v}
iNdEx = postIndex
- case 12:
+ case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ListSnapshots", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10252,30 +15714,29 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseListSnapshots{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.Value = append(m.Value[:0], dAtA[iNdEx:postIndex]...)
+ if m.Value == nil {
+ m.Value = []byte{}
}
- m.Value = &Response_ListSnapshots{v}
iNdEx = postIndex
- case 13:
+ case 8:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field OfferSnapshot", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ProofOps", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -10302,17 +15763,18 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseOfferSnapshot{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if m.ProofOps == nil {
+ m.ProofOps = &crypto.ProofOps{}
+ }
+ if err := m.ProofOps.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
- m.Value = &Response_OfferSnapshot{v}
iNdEx = postIndex
- case 14:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field LoadSnapshotChunk", wireType)
+ case 9:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
- var msglen int
+ m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10322,32 +15784,16 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Height |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- v := &ResponseLoadSnapshotChunk{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Response_LoadSnapshotChunk{v}
- iNdEx = postIndex
- case 15:
+ case 10:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ApplySnapshotChunk", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10357,26 +15803,23 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- v := &ResponseApplySnapshotChunk{}
- if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- m.Value = &Response_ApplySnapshotChunk{v}
+ m.Codespace = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -10399,7 +15842,7 @@ func (m *Response) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseException) Unmarshal(dAtA []byte) error {
+func (m *ResponseBeginBlock) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -10422,17 +15865,17 @@ func (m *ResponseException) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseException: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseBeginBlock: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseException: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseBeginBlock: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10442,23 +15885,25 @@ func (m *ResponseException) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Error = string(dAtA[iNdEx:postIndex])
+ m.Events = append(m.Events, Event{})
+ if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -10481,7 +15926,7 @@ func (m *ResponseException) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseEcho) Unmarshal(dAtA []byte) error {
+func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -10504,17 +15949,36 @@ func (m *ResponseEcho) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseEcho: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseCheckTx: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseEcho: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseCheckTx: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+ }
+ m.Code = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Code |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
}
- var stringLen uint64
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10524,129 +15988,133 @@ func (m *ResponseEcho) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Message = string(dAtA[iNdEx:postIndex])
+ m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
+ if m.Data == nil {
+ m.Data = []byte{}
+ }
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
}
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *ResponseFlush) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ m.Log = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: ResponseFlush: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseFlush: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if (skippy < 0) || (iNdEx+skippy) < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- if (iNdEx + skippy) > l {
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ m.Info = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field GasWanted", wireType)
+ }
+ m.GasWanted = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.GasWanted |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field GasUsed", wireType)
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ m.GasUsed = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.GasUsed |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: ResponseInfo: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseInfo: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10656,27 +16124,29 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Data = string(dAtA[iNdEx:postIndex])
+ m.Events = append(m.Events, Event{})
+ if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
- case 2:
+ case 8:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -10704,13 +16174,13 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Version = string(dAtA[iNdEx:postIndex])
+ m.Codespace = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field AppVersion", wireType)
+ case 9:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Sender", wireType)
}
- m.AppVersion = 0
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10720,16 +16190,29 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.AppVersion |= uint64(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 4:
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Sender = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 10:
if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field LastBlockHeight", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Priority", wireType)
}
- m.LastBlockHeight = 0
+ m.Priority = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10739,16 +16222,16 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.LastBlockHeight |= int64(b&0x7F) << shift
+ m.Priority |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 5:
+ case 11:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field LastBlockAppHash", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field MempoolError", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10758,25 +16241,23 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.LastBlockAppHash = append(m.LastBlockAppHash[:0], dAtA[iNdEx:postIndex]...)
- if m.LastBlockAppHash == nil {
- m.LastBlockAppHash = []byte{}
- }
+ m.MempoolError = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -10799,7 +16280,7 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
+func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -10822,17 +16303,36 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseInitChain: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseDeliverTx: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseInitChain: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseDeliverTx: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+ }
+ m.Code = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Code |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParams", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10842,33 +16342,31 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.ConsensusParams == nil {
- m.ConsensusParams = &types1.ConsensusParams{}
- }
- if err := m.ConsensusParams.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
+ if m.Data == nil {
+ m.Data = []byte{}
}
iNdEx = postIndex
case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10878,31 +16376,29 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
- if m.AppHash == nil {
- m.AppHash = []byte{}
- }
+ m.Log = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 100:
+ case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ValidatorSetUpdate", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10912,28 +16408,65 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if err := m.ValidatorSetUpdate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
+ m.Info = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 101:
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field GasWanted", wireType)
+ }
+ m.GasWanted = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.GasWanted |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field GasUsed", wireType)
+ }
+ m.GasUsed = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.GasUsed |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field NextCoreChainLockUpdate", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -10960,18 +16493,16 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.NextCoreChainLockUpdate == nil {
- m.NextCoreChainLockUpdate = &types1.CoreChainLock{}
- }
- if err := m.NextCoreChainLockUpdate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ m.Events = append(m.Events, Event{})
+ if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
- case 102:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field InitialCoreHeight", wireType)
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
}
- m.InitialCoreHeight = 0
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -10981,11 +16512,24 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.InitialCoreHeight |= uint32(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Codespace = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -11007,7 +16551,7 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
+func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -11030,17 +16574,17 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseQuery: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseEndBlock: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseQuery: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseEndBlock: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
- case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParamUpdates", wireType)
}
- m.Code = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11050,16 +16594,33 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Code |= uint32(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.ConsensusParamUpdates == nil {
+ m.ConsensusParamUpdates = &types1.ConsensusParams{}
+ }
+ if err := m.ConsensusParamUpdates.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11069,29 +16630,31 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Log = string(dAtA[iNdEx:postIndex])
+ m.Events = append(m.Events, Event{})
+ if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
- case 4:
+ case 100:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field NextCoreChainLockUpdate", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11101,48 +16664,33 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Info = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
+ if m.NextCoreChainLockUpdate == nil {
+ m.NextCoreChainLockUpdate = &types1.CoreChainLock{}
}
- m.Index = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Index |= int64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
+ if err := m.NextCoreChainLockUpdate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- case 6:
+ iNdEx = postIndex
+ case 101:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ValidatorSetUpdate", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11152,29 +16700,81 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Key = append(m.Key[:0], dAtA[iNdEx:postIndex]...)
- if m.Key == nil {
- m.Key = []byte{}
+ if m.ValidatorSetUpdate == nil {
+ m.ValidatorSetUpdate = &ValidatorSetUpdate{}
+ }
+ if err := m.ValidatorSetUpdate.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
iNdEx = postIndex
- case 7:
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponseCommit) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseCommit: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseCommit: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
@@ -11201,16 +16801,16 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Value = append(m.Value[:0], dAtA[iNdEx:postIndex]...)
- if m.Value == nil {
- m.Value = []byte{}
+ m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
+ if m.Data == nil {
+ m.Data = []byte{}
}
iNdEx = postIndex
- case 8:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ProofOps", wireType)
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RetainHeight", wireType)
}
- var msglen int
+ m.RetainHeight = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11220,52 +16820,66 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.RetainHeight |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
}
- postIndex := iNdEx + msglen
- if postIndex < 0 {
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
- if postIndex > l {
+ if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
- if m.ProofOps == nil {
- m.ProofOps = &crypto.ProofOps{}
- }
- if err := m.ProofOps.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponseListSnapshots) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
}
- iNdEx = postIndex
- case 9:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
}
- m.Height = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Height |= int64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- case 10:
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseListSnapshots: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseListSnapshots: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Snapshots", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11275,23 +16889,25 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Codespace = string(dAtA[iNdEx:postIndex])
+ m.Snapshots = append(m.Snapshots, &Snapshot{})
+ if err := m.Snapshots[len(m.Snapshots)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -11314,7 +16930,7 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseBeginBlock) Unmarshal(dAtA []byte) error {
+func (m *ResponseOfferSnapshot) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -11337,17 +16953,17 @@ func (m *ResponseBeginBlock) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseBeginBlock: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseOfferSnapshot: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseBeginBlock: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseOfferSnapshot: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType)
}
- var msglen int
+ m.Result = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11357,26 +16973,11 @@ func (m *ResponseBeginBlock) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Result |= ResponseOfferSnapshot_Result(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Events = append(m.Events, Event{})
- if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -11398,7 +16999,7 @@ func (m *ResponseBeginBlock) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
+func (m *ResponseLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -11421,34 +17022,15 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseCheckTx: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseLoadSnapshotChunk: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseCheckTx: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseLoadSnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
- }
- m.Code = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Code |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Chunk", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
@@ -11475,48 +17057,66 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
- if m.Data == nil {
- m.Data = []byte{}
+ m.Chunk = append(m.Chunk[:0], dAtA[iNdEx:postIndex]...)
+ if m.Chunk == nil {
+ m.Chunk = []byte{}
}
iNdEx = postIndex
- case 3:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
- }
- var stringLen uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLen |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
}
- if postIndex > l {
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
return io.ErrUnexpectedEOF
}
- m.Log = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 4:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
}
- var stringLen uint64
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseApplySnapshotChunk: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseApplySnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType)
+ }
+ m.Result = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11526,48 +17126,92 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ m.Result |= ResponseApplySnapshotChunk_Result(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Info = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
- case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field GasWanted", wireType)
- }
- m.GasWanted = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ case 2:
+ if wireType == 0 {
+ var v uint32
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if iNdEx >= l {
+ m.RefetchChunks = append(m.RefetchChunks, v)
+ } else if wireType == 2 {
+ var packedLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ packedLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if packedLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + packedLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- m.GasWanted |= int64(b&0x7F) << shift
- if b < 0x80 {
- break
+ var elementCount int
+ var count int
+ for _, integer := range dAtA[iNdEx:postIndex] {
+ if integer < 128 {
+ count++
+ }
+ }
+ elementCount = count
+ if elementCount != 0 && len(m.RefetchChunks) == 0 {
+ m.RefetchChunks = make([]uint32, 0, elementCount)
+ }
+ for iNdEx < postIndex {
+ var v uint32
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.RefetchChunks = append(m.RefetchChunks, v)
}
+ } else {
+ return fmt.Errorf("proto: wrong wireType = %d for field RefetchChunks", wireType)
}
- case 6:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field GasUsed", wireType)
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RejectSenders", wireType)
}
- m.GasUsed = 0
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11577,14 +17221,77 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.GasUsed |= int64(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 7:
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.RejectSenders = append(m.RejectSenders, string(dAtA[iNdEx:postIndex]))
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponsePrepareProposal) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponsePrepareProposal: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponsePrepareProposal: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field TxRecords", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -11611,16 +17318,16 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Events = append(m.Events, Event{})
- if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ m.TxRecords = append(m.TxRecords, &TxRecord{})
+ if err := m.TxRecords[len(m.TxRecords)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
- case 8:
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
}
- var stringLen uint64
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11630,29 +17337,31 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Codespace = string(dAtA[iNdEx:postIndex])
+ m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.AppHash == nil {
+ m.AppHash = []byte{}
+ }
iNdEx = postIndex
- case 9:
+ case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Sender", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field TxResults", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11662,29 +17371,31 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Sender = string(dAtA[iNdEx:postIndex])
+ m.TxResults = append(m.TxResults, &ExecTxResult{})
+ if err := m.TxResults[len(m.TxResults)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
- case 10:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Priority", wireType)
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ValidatorUpdates", wireType)
}
- m.Priority = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11694,16 +17405,31 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Priority |= int64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 11:
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.ValidatorUpdates = append(m.ValidatorUpdates, &ValidatorUpdate{})
+ if err := m.ValidatorUpdates[len(m.ValidatorUpdates)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 5:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field MempoolError", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParamUpdates", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11713,23 +17439,27 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.MempoolError = string(dAtA[iNdEx:postIndex])
+ if m.ConsensusParamUpdates == nil {
+ m.ConsensusParamUpdates = &types1.ConsensusParams{}
+ }
+ if err := m.ConsensusParamUpdates.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -11752,7 +17482,7 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
+func (m *ResponseProcessProposal) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -11775,17 +17505,17 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseDeliverTx: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseProcessProposal: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseDeliverTx: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseProcessProposal: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
}
- m.Code = 0
+ m.Status = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11795,14 +17525,14 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Code |= uint32(b&0x7F) << shift
+ m.Status |= ResponseProcessProposal_ProposalStatus(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
@@ -11829,16 +17559,16 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
- if m.Data == nil {
- m.Data = []byte{}
+ m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.AppHash == nil {
+ m.AppHash = []byte{}
}
iNdEx = postIndex
case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field TxResults", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11848,29 +17578,31 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Log = string(dAtA[iNdEx:postIndex])
+ m.TxResults = append(m.TxResults, &ExecTxResult{})
+ if err := m.TxResults[len(m.TxResults)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ValidatorUpdates", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11880,29 +17612,31 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Info = string(dAtA[iNdEx:postIndex])
+ m.ValidatorUpdates = append(m.ValidatorUpdates, &ValidatorUpdate{})
+ if err := m.ValidatorUpdates[len(m.ValidatorUpdates)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
case 5:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field GasWanted", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParamUpdates", wireType)
}
- m.GasWanted = 0
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11912,35 +17646,83 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.GasWanted |= int64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 6:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field GasUsed", wireType)
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
}
- m.GasUsed = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.GasUsed |= int64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.ConsensusParamUpdates == nil {
+ m.ConsensusParamUpdates = &types1.ConsensusParams{}
+ }
+ if err := m.ConsensusParamUpdates.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponseExtendVote) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
}
- case 7:
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseExtendVote: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseExtendVote: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field VoteExtension", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11950,31 +17732,81 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Events = append(m.Events, Event{})
- if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.VoteExtension = append(m.VoteExtension[:0], dAtA[iNdEx:postIndex]...)
+ if m.VoteExtension == nil {
+ m.VoteExtension = []byte{}
}
iNdEx = postIndex
- case 8:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
}
- var stringLen uint64
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *ResponseVerifyVoteExtension) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ResponseVerifyVoteExtension: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ResponseVerifyVoteExtension: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
+ }
+ m.Status = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -11984,24 +17816,11 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ m.Status |= ResponseVerifyVoteExtension_VerifyStatus(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + intStringLen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Codespace = string(dAtA[iNdEx:postIndex])
- iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -12023,7 +17842,7 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
+func (m *ResponseFinalizeBlock) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12046,15 +17865,15 @@ func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseEndBlock: wiretype end group for non-group")
+ return fmt.Errorf("proto: ResponseFinalizeBlock: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseEndBlock: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ResponseFinalizeBlock: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
- case 2:
+ case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParamUpdates", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -12081,16 +17900,48 @@ func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.ConsensusParamUpdates == nil {
- m.ConsensusParamUpdates = &types1.ConsensusParams{}
+ m.Events = append(m.Events, Event{})
+ if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
- if err := m.ConsensusParamUpdates.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TxResults", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.TxResults = append(m.TxResults, &ExecTxResult{})
+ if err := m.TxResults[len(m.TxResults)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
- case 3:
+ case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParamUpdates", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -12117,11 +17968,66 @@ func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Events = append(m.Events, Event{})
- if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if m.ConsensusParamUpdates == nil {
+ m.ConsensusParamUpdates = &types1.ConsensusParams{}
+ }
+ if err := m.ConsensusParamUpdates.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field AppHash", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.AppHash = append(m.AppHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.AppHash == nil {
+ m.AppHash = []byte{}
+ }
+ iNdEx = postIndex
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RetainHeight", wireType)
+ }
+ m.RetainHeight = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RetainHeight |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
case 100:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field NextCoreChainLockUpdate", wireType)
@@ -12215,7 +18121,7 @@ func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseCommit) Unmarshal(dAtA []byte) error {
+func (m *CommitInfo) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12238,15 +18144,34 @@ func (m *ResponseCommit) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseCommit: wiretype end group for non-group")
+ return fmt.Errorf("proto: CommitInfo: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseCommit: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: CommitInfo: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
- case 2:
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Round", wireType)
+ }
+ m.Round = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Round |= int32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field QuorumHash", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
@@ -12273,16 +18198,16 @@ func (m *ResponseCommit) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
- if m.Data == nil {
- m.Data = []byte{}
+ m.QuorumHash = append(m.QuorumHash[:0], dAtA[iNdEx:postIndex]...)
+ if m.QuorumHash == nil {
+ m.QuorumHash = []byte{}
}
iNdEx = postIndex
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field RetainHeight", wireType)
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field BlockSignature", wireType)
}
- m.RetainHeight = 0
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12292,66 +18217,31 @@ func (m *ResponseCommit) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.RetainHeight |= int64(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *ResponseListSnapshots) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
}
- if iNdEx >= l {
+ if postIndex > l {
return io.ErrUnexpectedEOF
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ m.BlockSignature = append(m.BlockSignature[:0], dAtA[iNdEx:postIndex]...)
+ if m.BlockSignature == nil {
+ m.BlockSignature = []byte{}
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: ResponseListSnapshots: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseListSnapshots: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ iNdEx = postIndex
+ case 5:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Snapshots", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field StateSignature", wireType)
}
- var msglen int
+ var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12361,24 +18251,24 @@ func (m *ResponseListSnapshots) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ if byteLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Snapshots = append(m.Snapshots, &Snapshot{})
- if err := m.Snapshots[len(m.Snapshots)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
+ m.StateSignature = append(m.StateSignature[:0], dAtA[iNdEx:postIndex]...)
+ if m.StateSignature == nil {
+ m.StateSignature = []byte{}
}
iNdEx = postIndex
default:
@@ -12402,7 +18292,7 @@ func (m *ResponseListSnapshots) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseOfferSnapshot) Unmarshal(dAtA []byte) error {
+func (m *ExtendedCommitInfo) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12425,17 +18315,17 @@ func (m *ResponseOfferSnapshot) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseOfferSnapshot: wiretype end group for non-group")
+ return fmt.Errorf("proto: ExtendedCommitInfo: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseOfferSnapshot: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ExtendedCommitInfo: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Round", wireType)
}
- m.Result = 0
+ m.Round = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12445,66 +18335,16 @@ func (m *ResponseOfferSnapshot) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Result |= ResponseOfferSnapshot_Result(b&0x7F) << shift
+ m.Round |= int32(b&0x7F) << shift
if b < 0x80 {
break
}
}
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *ResponseLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: ResponseLoadSnapshotChunk: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseLoadSnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Chunk", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Votes", wireType)
}
- var byteLen int
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12514,24 +18354,24 @@ func (m *ResponseLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Chunk = append(m.Chunk[:0], dAtA[iNdEx:postIndex]...)
- if m.Chunk == nil {
- m.Chunk = []byte{}
+ m.Votes = append(m.Votes, ExtendedVoteInfo{})
+ if err := m.Votes[len(m.Votes)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
iNdEx = postIndex
default:
@@ -12555,7 +18395,7 @@ func (m *ResponseLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
+func (m *Event) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12578,17 +18418,17 @@ func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: ResponseApplySnapshotChunk: wiretype end group for non-group")
+ return fmt.Errorf("proto: Event: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: ResponseApplySnapshotChunk: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Event: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType)
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
}
- m.Result = 0
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12598,92 +18438,29 @@ func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Result |= ResponseApplySnapshotChunk_Result(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- case 2:
- if wireType == 0 {
- var v uint32
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- v |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- m.RefetchChunks = append(m.RefetchChunks, v)
- } else if wireType == 2 {
- var packedLen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- packedLen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if packedLen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + packedLen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- var elementCount int
- var count int
- for _, integer := range dAtA[iNdEx:postIndex] {
- if integer < 128 {
- count++
- }
- }
- elementCount = count
- if elementCount != 0 && len(m.RefetchChunks) == 0 {
- m.RefetchChunks = make([]uint32, 0, elementCount)
- }
- for iNdEx < postIndex {
- var v uint32
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- v |= uint32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- m.RefetchChunks = append(m.RefetchChunks, v)
- }
- } else {
- return fmt.Errorf("proto: wrong wireType = %d for field RefetchChunks", wireType)
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthTypes
}
- case 3:
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Type = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field RejectSenders", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12693,23 +18470,25 @@ func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.RejectSenders = append(m.RejectSenders, string(dAtA[iNdEx:postIndex]))
+ m.Attributes = append(m.Attributes, EventAttribute{})
+ if err := m.Attributes[len(m.Attributes)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -12732,7 +18511,7 @@ func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
+func (m *EventAttribute) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12755,36 +18534,17 @@ func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: LastCommitInfo: wiretype end group for non-group")
+ return fmt.Errorf("proto: EventAttribute: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: LastCommitInfo: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: EventAttribute: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Round", wireType)
- }
- m.Round = 0
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- m.Round |= int32(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- case 3:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field QuorumHash", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12794,31 +18554,29 @@ func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.QuorumHash = append(m.QuorumHash[:0], dAtA[iNdEx:postIndex]...)
- if m.QuorumHash == nil {
- m.QuorumHash = []byte{}
- }
+ m.Key = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 4:
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field BlockSignature", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
}
- var byteLen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12828,31 +18586,29 @@ func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + byteLen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.BlockSignature = append(m.BlockSignature[:0], dAtA[iNdEx:postIndex]...)
- if m.BlockSignature == nil {
- m.BlockSignature = []byte{}
- }
+ m.Value = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 5:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field StateSignature", wireType)
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
}
- var byteLen int
+ var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12862,26 +18618,12 @@ func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- byteLen |= int(b&0x7F) << shift
+ v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if byteLen < 0 {
- return ErrInvalidLengthTypes
- }
- postIndex := iNdEx + byteLen
- if postIndex < 0 {
- return ErrInvalidLengthTypes
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.StateSignature = append(m.StateSignature[:0], dAtA[iNdEx:postIndex]...)
- if m.StateSignature == nil {
- m.StateSignature = []byte{}
- }
- iNdEx = postIndex
+ m.Index = bool(v != 0)
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -12903,7 +18645,7 @@ func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *Event) Unmarshal(dAtA []byte) error {
+func (m *ExecTxResult) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -12926,15 +18668,68 @@ func (m *Event) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Event: wiretype end group for non-group")
+ return fmt.Errorf("proto: ExecTxResult: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Event: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: ExecTxResult: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Code", wireType)
+ }
+ m.Code = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Code |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Data", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Data = append(m.Data[:0], dAtA[iNdEx:postIndex]...)
+ if m.Data == nil {
+ m.Data = []byte{}
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Log", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -12962,13 +18757,13 @@ func (m *Event) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Type = string(dAtA[iNdEx:postIndex])
+ m.Log = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 2:
+ case 4:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Attributes", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -12978,81 +18773,67 @@ func (m *Event) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Attributes = append(m.Attributes, EventAttribute{})
- if err := m.Attributes[len(m.Attributes)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
+ m.Info = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipTypes(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if (skippy < 0) || (iNdEx+skippy) < 0 {
- return ErrInvalidLengthTypes
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field GasWanted", wireType)
}
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *EventAttribute) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
+ m.GasWanted = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.GasWanted |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field GasUsed", wireType)
}
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
+ m.GasUsed = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.GasUsed |= int64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
}
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: EventAttribute: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: EventAttribute: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ case 7:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Events", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
@@ -13062,27 +18843,29 @@ func (m *EventAttribute) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthTypes
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Key = string(dAtA[iNdEx:postIndex])
+ m.Events = append(m.Events, Event{})
+ if err := m.Events[len(m.Events)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
- case 2:
+ case 8:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Codespace", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -13110,28 +18893,8 @@ func (m *EventAttribute) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Value = string(dAtA[iNdEx:postIndex])
+ m.Codespace = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Index", wireType)
- }
- var v int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowTypes
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- v |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- m.Index = bool(v != 0)
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -13308,6 +19071,109 @@ func (m *TxResult) Unmarshal(dAtA []byte) error {
}
return nil
}
+func (m *TxRecord) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: TxRecord: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: TxRecord: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Action", wireType)
+ }
+ m.Action = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Action |= TxRecord_TxAction(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Tx", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Tx = append(m.Tx[:0], dAtA[iNdEx:postIndex]...)
+ if m.Tx == nil {
+ m.Tx = []byte{}
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
func (m *Validator) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -14003,7 +19869,144 @@ func (m *VoteInfo) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *Evidence) Unmarshal(dAtA []byte) error {
+func (m *ExtendedVoteInfo) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: ExtendedVoteInfo: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: ExtendedVoteInfo: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Validator", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Validator.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field SignedLastBlock", wireType)
+ }
+ var v int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.SignedLastBlock = bool(v != 0)
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field VoteExtension", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowTypes
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthTypes
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.VoteExtension = append(m.VoteExtension[:0], dAtA[iNdEx:postIndex]...)
+ if m.VoteExtension == nil {
+ m.VoteExtension = []byte{}
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipTypes(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthTypes
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
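The generated `Unmarshal` methods above inline protobuf base-128 varint decoding at every field. A minimal stand-alone sketch of that loop (`decodeUvarint` is illustrative, not part of the generated API; the `shift >= 64` guard plays the role of `ErrIntOverflowTypes`):

```go
package main

import (
	"errors"
	"fmt"
)

// decodeUvarint mirrors the inline loop the generator emits: each byte
// contributes its low 7 bits, and a set high bit means "more bytes follow".
func decodeUvarint(buf []byte) (v uint64, n int, err error) {
	for shift := uint(0); ; shift += 7 {
		if shift >= 64 {
			return 0, 0, errors.New("varint overflows uint64")
		}
		if n >= len(buf) {
			return 0, 0, errors.New("unexpected EOF")
		}
		b := buf[n]
		n++
		v |= uint64(b&0x7F) << shift
		if b < 0x80 { // high bit clear: final byte
			return v, n, nil
		}
	}
}

func main() {
	// The first varint of each field is the key: fieldNum<<3 | wireType,
	// which is exactly what the generated code splits with >>3 and &0x7.
	key, n, err := decodeUvarint([]byte{0x0A}) // field 1, wire type 2
	fmt.Println(key>>3, key&0x7, n, err)
}
```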
+func (m *Misbehavior) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -14026,10 +20029,10 @@ func (m *Evidence) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Evidence: wiretype end group for non-group")
+ return fmt.Errorf("proto: Misbehavior: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Evidence: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: Misbehavior: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
@@ -14046,7 +20049,7 @@ func (m *Evidence) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Type |= EvidenceType(b&0x7F) << shift
+ m.Type |= MisbehaviorType(b&0x7F) << shift
if b < 0x80 {
break
}
diff --git a/abci/types/types_test.go b/abci/types/types_test.go
new file mode 100644
index 0000000000..f79a244544
--- /dev/null
+++ b/abci/types/types_test.go
@@ -0,0 +1,74 @@
+package types_test
+
+import (
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+
+ abci "github.com/tendermint/tendermint/abci/types"
+ "github.com/tendermint/tendermint/crypto/merkle"
+)
+
+func TestHashAndProveResults(t *testing.T) {
+ trs := []*abci.ExecTxResult{
+ // Note, these tests rely on the first two entries being in this order.
+ {Code: 0, Data: nil},
+ {Code: 0, Data: []byte{}},
+
+ {Code: 0, Data: []byte("one")},
+ {Code: 14, Data: nil},
+ {Code: 14, Data: []byte("foo")},
+ {Code: 14, Data: []byte("bar")},
+ }
+
+ // Nil and []byte{} should produce the same bytes
+ bz0, err := trs[0].Marshal()
+ require.NoError(t, err)
+ bz1, err := trs[1].Marshal()
+ require.NoError(t, err)
+ require.Equal(t, bz0, bz1)
+
+ // Make sure that we can get a root hash from results and verify proofs.
+ rs, err := abci.MarshalTxResults(trs)
+ require.NoError(t, err)
+ root := merkle.HashFromByteSlices(rs)
+ assert.NotEmpty(t, root)
+
+ _, proofs := merkle.ProofsFromByteSlices(rs)
+ for i, tr := range trs {
+ bz, err := tr.Marshal()
+ require.NoError(t, err)
+
+ valid := proofs[i].Verify(root, bz)
+ assert.NoError(t, valid, "%d", i)
+ }
+}
+
+func TestHashDeterministicFieldsOnly(t *testing.T) {
+ tr1 := abci.ExecTxResult{
+ Code: 1,
+ Data: []byte("transaction"),
+ Log: "nondeterministic data: abc",
+ Info: "nondeterministic data: abc",
+ GasWanted: 1000,
+ GasUsed: 1000,
+ Events: []abci.Event{},
+ Codespace: "nondeterministic.data.abc",
+ }
+ tr2 := abci.ExecTxResult{
+ Code: 1,
+ Data: []byte("transaction"),
+ Log: "nondeterministic data: def",
+ Info: "nondeterministic data: def",
+ GasWanted: 1000,
+ GasUsed: 1000,
+ Events: []abci.Event{},
+ Codespace: "nondeterministic.data.def",
+ }
+ r1, err := abci.MarshalTxResults([]*abci.ExecTxResult{&tr1})
+ require.NoError(t, err)
+ r2, err := abci.MarshalTxResults([]*abci.ExecTxResult{&tr2})
+ require.NoError(t, err)
+ require.Equal(t, merkle.HashFromByteSlices(r1), merkle.HashFromByteSlices(r2))
+}
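`TestHashDeterministicFieldsOnly` checks that only consensus-relevant fields feed the results hash, so per-node strings such as `Log` and `Info` cannot fork the app hash. A sketch of that idea using a hypothetical `txResult` type and sha256 (the real code instead marshals a pruned protobuf message via `MarshalTxResults` and Merkle-hashes the encodings):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// txResult is a stand-in for abci.ExecTxResult; Log stands for all the
// nondeterministic fields that must be excluded from the hash.
type txResult struct {
	Code      uint32
	Data      []byte
	Log       string // nondeterministic: excluded from the hash
	GasWanted int64
	GasUsed   int64
}

// deterministicHash feeds only the deterministic fields into the digest.
func deterministicHash(r txResult) [32]byte {
	var buf []byte
	buf = binary.BigEndian.AppendUint32(buf, r.Code)
	buf = append(buf, r.Data...)
	buf = binary.BigEndian.AppendUint64(buf, uint64(r.GasWanted))
	buf = binary.BigEndian.AppendUint64(buf, uint64(r.GasUsed))
	return sha256.Sum256(buf)
}

func main() {
	a := txResult{Code: 1, Data: []byte("tx"), Log: "abc", GasWanted: 1000, GasUsed: 1000}
	b := a
	b.Log = "def" // differs per node, must not change the hash
	fmt.Println(deterministicHash(a) == deterministicHash(b))
}
```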
diff --git a/abci/version/version.go b/abci/version/version.go
deleted file mode 100644
index f4dc4d2358..0000000000
--- a/abci/version/version.go
+++ /dev/null
@@ -1,9 +0,0 @@
-package version
-
-import (
- "github.com/tendermint/tendermint/version"
-)
-
-// TODO: eliminate this after some version refactor
-
-const Version = version.ABCIVersion
diff --git a/buf.gen.yaml b/buf.gen.yaml
index dc56781dd4..d972360bbd 100644
--- a/buf.gen.yaml
+++ b/buf.gen.yaml
@@ -1,13 +1,9 @@
-# The version of the generation template.
-# Required.
-# The only currently-valid value is v1beta1.
-version: v1beta1
-
-# The plugins to run.
+version: v1
plugins:
- # The name of the plugin.
- name: gogofaster
- # The the relative output directory.
- out: proto
- # Any options to provide to the plugin.
- opt: Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,plugins=grpc,paths=source_relative
+ out: ./proto/
+ opt:
+ - Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types
+ - Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration
+ - plugins=grpc
+ - paths=source_relative
diff --git a/buf.work.yaml b/buf.work.yaml
new file mode 100644
index 0000000000..1878b341be
--- /dev/null
+++ b/buf.work.yaml
@@ -0,0 +1,3 @@
+version: v1
+directories:
+ - proto
diff --git a/cmd/priv_val_server/main.go b/cmd/priv_val_server/main.go
index 203b3df0dd..9014221450 100644
--- a/cmd/priv_val_server/main.go
+++ b/cmd/priv_val_server/main.go
@@ -6,10 +6,11 @@ import (
"crypto/x509"
"flag"
"fmt"
- "io/ioutil"
"net"
"net/http"
"os"
+ "os/signal"
+ "syscall"
"time"
grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
@@ -20,7 +21,6 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
- tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
grpcprivval "github.com/tendermint/tendermint/privval/grpc"
privvalproto "github.com/tendermint/tendermint/proto/tendermint/privval"
@@ -45,12 +45,19 @@ func main() {
keyFile = flag.String("keyfile", "", "absolute path to server key")
rootCA = flag.String("rootcafile", "", "absolute path to root CA")
prometheusAddr = flag.String("prometheus-addr", "", "address for prometheus endpoint (host:port)")
-
- logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false).
- With("module", "priv_val")
)
flag.Parse()
+ logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "failed to construct logger: %v", err)
+ os.Exit(1)
+ }
+ logger = logger.With("module", "priv_val")
+
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
logger.Info(
"Starting private validator",
"addr", *addr,
@@ -78,7 +85,7 @@ func main() {
}
certPool := x509.NewCertPool()
- bs, err := ioutil.ReadFile(*rootCA)
+ bs, err := os.ReadFile(*rootCA)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to read client ca cert: %s", err)
os.Exit(1)
@@ -106,7 +113,7 @@ func main() {
// add prometheus metrics for unary RPC calls
opts = append(opts, grpc.UnaryInterceptor(grpc_prometheus.UnaryServerInterceptor))
- ss := grpcprivval.NewSignerServer(*chainID, pv, logger)
+ ss := grpcprivval.NewSignerServer(logger, *chainID, pv)
protocol, address := tmnet.ProtocolAndAddress(*addr)
@@ -131,9 +138,10 @@ func main() {
os.Exit(1)
}
- // Stop upon receiving SIGTERM or CTRL-C.
- tmos.TrapSignal(logger, func() {
- logger.Debug("SignerServer: calling Close")
+ opctx, opcancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
+ defer opcancel()
+ go func() {
+ <-opctx.Done()
if *prometheusAddr != "" {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
@@ -143,7 +151,7 @@ func main() {
}
}
s.GracefulStop()
- })
+ }()
// Run forever.
select {}
diff --git a/cmd/tenderdash/commands/completion.go b/cmd/tenderdash/commands/completion.go
new file mode 100644
index 0000000000..d2c81f0afc
--- /dev/null
+++ b/cmd/tenderdash/commands/completion.go
@@ -0,0 +1,46 @@
+package commands
+
+import (
+ "fmt"
+
+ "github.com/spf13/cobra"
+)
+
+// NewCompletionCmd returns a cobra.Command that generates bash and zsh
+// completion scripts for the given root command. If hidden is true, the
+// command will not show up in the root command's list of available commands.
+func NewCompletionCmd(rootCmd *cobra.Command, hidden bool) *cobra.Command {
+ flagZsh := "zsh"
+ cmd := &cobra.Command{
+ Use: "completion",
+ Short: "Generate shell completion scripts",
+ Long: fmt.Sprintf(`Generate Bash and Zsh completion scripts and print them to STDOUT.
+
+Once saved to file, a completion script can be loaded in the shell's
+current session as shown:
+
+ $ . <(%s completion)
+
+To configure your bash shell to load completions for each session add to
+your $HOME/.bashrc or $HOME/.profile the following instruction:
+
+ . <(%s completion)
+`, rootCmd.Use, rootCmd.Use),
+ RunE: func(cmd *cobra.Command, _ []string) error {
+ zsh, err := cmd.Flags().GetBool(flagZsh)
+ if err != nil {
+ return err
+ }
+ if zsh {
+ return rootCmd.GenZshCompletion(cmd.OutOrStdout())
+ }
+ return rootCmd.GenBashCompletion(cmd.OutOrStdout())
+ },
+ Hidden: hidden,
+ Args: cobra.NoArgs,
+ }
+
+ cmd.Flags().Bool(flagZsh, false, "Generate Zsh completion script")
+
+ return cmd
+}
diff --git a/cmd/tenderdash/commands/debug/debug.go b/cmd/tenderdash/commands/debug/debug.go
index e07f7978de..7fd5b030f7 100644
--- a/cmd/tenderdash/commands/debug/debug.go
+++ b/cmd/tenderdash/commands/debug/debug.go
@@ -6,34 +6,26 @@ import (
"github.com/tendermint/tendermint/libs/log"
)
-var (
- nodeRPCAddr string
- profAddr string
- frequency uint
-
+const (
flagNodeRPCAddr = "rpc-laddr"
flagProfAddr = "pprof-laddr"
flagFrequency = "frequency"
-
- logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
)
-// DebugCmd defines the root command containing subcommands that assist in
-// debugging running Tendermint processes.
-var DebugCmd = &cobra.Command{
- Use: "debug",
- Short: "A utility to kill or watch a Tendermint process while aggregating debugging data",
-}
-
-func init() {
- DebugCmd.PersistentFlags().SortFlags = true
- DebugCmd.PersistentFlags().StringVar(
- &nodeRPCAddr,
+func GetDebugCommand(logger log.Logger) *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "debug",
+ Short: "A utility to kill or watch a Tendermint process while aggregating debugging data",
+ }
+ cmd.PersistentFlags().SortFlags = true
+ cmd.PersistentFlags().String(
flagNodeRPCAddr,
"tcp://localhost:26657",
- "the Tendermint node's RPC address (<host>:<port>)",
+ "the Tendermint node's RPC address (<host>:<port>)",
)
- DebugCmd.AddCommand(killCmd)
- DebugCmd.AddCommand(dumpCmd)
+ cmd.AddCommand(getKillCmd(logger))
+ cmd.AddCommand(getDumpCmd(logger))
+ return cmd
+
}
diff --git a/cmd/tenderdash/commands/debug/dump.go b/cmd/tenderdash/commands/debug/dump.go
index cb1cc942a8..d84f6e10aa 100644
--- a/cmd/tenderdash/commands/debug/dump.go
+++ b/cmd/tenderdash/commands/debug/dump.go
@@ -1,9 +1,9 @@
package debug
import (
+ "context"
"errors"
"fmt"
- "io/ioutil"
"os"
"path/filepath"
"time"
@@ -13,76 +13,102 @@ import (
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
+ "github.com/tendermint/tendermint/libs/log"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)
-var dumpCmd = &cobra.Command{
- Use: "dump [output-directory]",
- Short: "Continuously poll a Tendermint process and dump debugging data into a single location",
- Long: `Continuously poll a Tendermint process and dump debugging data into a single
+func getDumpCmd(logger log.Logger) *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "dump [output-directory]",
+ Short: "Continuously poll a Tendermint process and dump debugging data into a single location",
+ Long: `Continuously poll a Tendermint process and dump debugging data into a single
location at a specified frequency. At each frequency interval, an archived and compressed
file will contain node debugging information including the goroutine and heap profiles
if enabled.`,
- Args: cobra.ExactArgs(1),
- RunE: dumpCmdHandler,
-}
-
-func init() {
- dumpCmd.Flags().UintVar(
- &frequency,
+ Args: cobra.ExactArgs(1),
+ RunE: func(cmd *cobra.Command, args []string) error {
+ outDir := args[0]
+ if outDir == "" {
+ return errors.New("invalid output directory")
+ }
+ frequency, err := cmd.Flags().GetUint(flagFrequency)
+ if err != nil {
+ return fmt.Errorf("flag %q not defined: %w", flagFrequency, err)
+ }
+
+ if frequency == 0 {
+ return errors.New("frequency must be positive")
+ }
+
+ nodeRPCAddr, err := cmd.Flags().GetString(flagNodeRPCAddr)
+ if err != nil {
+ return fmt.Errorf("flag %q not defined: %w", flagNodeRPCAddr, err)
+ }
+
+ profAddr, err := cmd.Flags().GetString(flagProfAddr)
+ if err != nil {
+ return fmt.Errorf("flag %q not defined: %w", flagProfAddr, err)
+ }
+
+ if _, err := os.Stat(outDir); os.IsNotExist(err) {
+ if err := os.Mkdir(outDir, os.ModePerm); err != nil {
+ return fmt.Errorf("failed to create output directory: %w", err)
+ }
+ }
+
+ rpc, err := rpchttp.New(nodeRPCAddr)
+ if err != nil {
+ return fmt.Errorf("failed to create new http client: %w", err)
+ }
+
+ ctx := cmd.Context()
+
+ home := viper.GetString(cli.HomeFlag)
+ conf := config.DefaultConfig()
+ conf = conf.SetRoot(home)
+ config.EnsureRoot(conf.RootDir)
+
+ dumpArgs := dumpDebugDataArgs{
+ conf: conf,
+ outDir: outDir,
+ profAddr: profAddr,
+ }
+ dumpDebugData(ctx, logger, rpc, dumpArgs)
+
+ ticker := time.NewTicker(time.Duration(frequency) * time.Second)
+ for range ticker.C {
+ dumpDebugData(ctx, logger, rpc, dumpArgs)
+ }
+
+ return nil
+ },
+ }
+ cmd.Flags().Uint(
flagFrequency,
30,
"the frequency (seconds) in which to poll, aggregate and dump Tendermint debug data",
)
- dumpCmd.Flags().StringVar(
- &profAddr,
+ cmd.Flags().String(
flagProfAddr,
"",
"the profiling server address (<host>:<port>)",
)
-}
-func dumpCmdHandler(_ *cobra.Command, args []string) error {
- outDir := args[0]
- if outDir == "" {
- return errors.New("invalid output directory")
- }
+ return cmd
- if frequency == 0 {
- return errors.New("frequency must be positive")
- }
-
- if _, err := os.Stat(outDir); os.IsNotExist(err) {
- if err := os.Mkdir(outDir, os.ModePerm); err != nil {
- return fmt.Errorf("failed to create output directory: %w", err)
- }
- }
-
- rpc, err := rpchttp.New(nodeRPCAddr)
- if err != nil {
- return fmt.Errorf("failed to create new http client: %w", err)
- }
-
- home := viper.GetString(cli.HomeFlag)
- conf := config.DefaultConfig()
- conf = conf.SetRoot(home)
- config.EnsureRoot(conf.RootDir)
-
- dumpDebugData(outDir, conf, rpc)
-
- ticker := time.NewTicker(time.Duration(frequency) * time.Second)
- for range ticker.C {
- dumpDebugData(outDir, conf, rpc)
- }
+}
- return nil
+type dumpDebugDataArgs struct {
+ conf *config.Config
+ outDir string
+ profAddr string
}
-func dumpDebugData(outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
+func dumpDebugData(ctx context.Context, logger log.Logger, rpc *rpchttp.HTTP, args dumpDebugDataArgs) {
start := time.Now().UTC()
- tmpDir, err := ioutil.TempDir(outDir, "tendermint_debug_tmp")
+ tmpDir, err := os.MkdirTemp(args.outDir, "tendermint_debug_tmp")
if err != nil {
logger.Error("failed to create temporary directory", "dir", tmpDir, "error", err)
return
@@ -90,44 +116,44 @@ func dumpDebugData(outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
defer os.RemoveAll(tmpDir)
logger.Info("getting node status...")
- if err := dumpStatus(rpc, tmpDir, "status.json"); err != nil {
+ if err := dumpStatus(ctx, rpc, tmpDir, "status.json"); err != nil {
logger.Error("failed to dump node status", "error", err)
return
}
logger.Info("getting node network info...")
- if err := dumpNetInfo(rpc, tmpDir, "net_info.json"); err != nil {
+ if err := dumpNetInfo(ctx, rpc, tmpDir, "net_info.json"); err != nil {
logger.Error("failed to dump node network info", "error", err)
return
}
logger.Info("getting node consensus state...")
- if err := dumpConsensusState(rpc, tmpDir, "consensus_state.json"); err != nil {
+ if err := dumpConsensusState(ctx, rpc, tmpDir, "consensus_state.json"); err != nil {
logger.Error("failed to dump node consensus state", "error", err)
return
}
logger.Info("copying node WAL...")
- if err := copyWAL(conf, tmpDir); err != nil {
+ if err := copyWAL(args.conf, tmpDir); err != nil {
logger.Error("failed to copy node WAL", "error", err)
return
}
- if profAddr != "" {
+ if args.profAddr != "" {
logger.Info("getting node goroutine profile...")
- if err := dumpProfile(tmpDir, profAddr, "goroutine", 2); err != nil {
+ if err := dumpProfile(tmpDir, args.profAddr, "goroutine", 2); err != nil {
logger.Error("failed to dump goroutine profile", "error", err)
return
}
logger.Info("getting node heap profile...")
- if err := dumpProfile(tmpDir, profAddr, "heap", 2); err != nil {
+ if err := dumpProfile(tmpDir, args.profAddr, "heap", 2); err != nil {
logger.Error("failed to dump heap profile", "error", err)
return
}
}
- outFile := filepath.Join(outDir, fmt.Sprintf("%s.zip", start.Format(time.RFC3339)))
+ outFile := filepath.Join(args.outDir, fmt.Sprintf("%s.zip", start.Format(time.RFC3339)))
if err := zipDir(tmpDir, outFile); err != nil {
logger.Error("failed to create and compress archive", "file", outFile, "error", err)
}
diff --git a/cmd/tenderdash/commands/debug/io.go b/cmd/tenderdash/commands/debug/io.go
index dcfff50c89..bf904cf5c6 100644
--- a/cmd/tenderdash/commands/debug/io.go
+++ b/cmd/tenderdash/commands/debug/io.go
@@ -5,7 +5,6 @@ import (
"encoding/json"
"fmt"
"io"
- "io/ioutil"
"os"
"path"
"path/filepath"
@@ -111,5 +110,5 @@ func writeStateJSONToFile(state interface{}, dir, filename string) error {
return fmt.Errorf("failed to encode state dump: %w", err)
}
- return ioutil.WriteFile(path.Join(dir, filename), stateJSON, os.ModePerm)
+ return os.WriteFile(path.Join(dir, filename), stateJSON, os.ModePerm)
}
diff --git a/cmd/tenderdash/commands/debug/kill.go b/cmd/tenderdash/commands/debug/kill.go
index 3e749e5131..a6c1ac7d86 100644
--- a/cmd/tenderdash/commands/debug/kill.go
+++ b/cmd/tenderdash/commands/debug/kill.go
@@ -3,7 +3,6 @@ package debug
import (
"errors"
"fmt"
- "io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -16,88 +15,96 @@ import (
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
+ "github.com/tendermint/tendermint/libs/log"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)
-var killCmd = &cobra.Command{
- Use: "kill [pid] [compressed-output-file]",
- Short: "Kill a Tendermint process while aggregating and packaging debugging data",
- Long: `Kill a Tendermint process while also aggregating Tendermint process data
+func getKillCmd(logger log.Logger) *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "kill [pid] [compressed-output-file]",
+ Short: "Kill a Tendermint process while aggregating and packaging debugging data",
+ Long: `Kill a Tendermint process while also aggregating Tendermint process data
such as the latest node state, including consensus and networking state,
go-routine state, and the node's WAL and config information. This aggregated data
is packaged into a compressed archive.
Example:
$ tendermint debug kill 34255 /path/to/tm-debug.zip`,
- Args: cobra.ExactArgs(2),
- RunE: killCmdHandler,
-}
-
-func killCmdHandler(cmd *cobra.Command, args []string) error {
- pid, err := strconv.ParseUint(args[0], 10, 64)
- if err != nil {
- return err
- }
-
- outFile := args[1]
- if outFile == "" {
- return errors.New("invalid output file")
- }
-
- rpc, err := rpchttp.New(nodeRPCAddr)
- if err != nil {
- return fmt.Errorf("failed to create new http client: %w", err)
- }
-
- home := viper.GetString(cli.HomeFlag)
- conf := config.DefaultConfig()
- conf = conf.SetRoot(home)
- config.EnsureRoot(conf.RootDir)
-
- // Create a temporary directory which will contain all the state dumps and
- // relevant files and directories that will be compressed into a file.
- tmpDir, err := ioutil.TempDir(os.TempDir(), "tendermint_debug_tmp")
- if err != nil {
- return fmt.Errorf("failed to create temporary directory: %w", err)
- }
- defer os.RemoveAll(tmpDir)
-
- logger.Info("getting node status...")
- if err := dumpStatus(rpc, tmpDir, "status.json"); err != nil {
- return err
- }
-
- logger.Info("getting node network info...")
- if err := dumpNetInfo(rpc, tmpDir, "net_info.json"); err != nil {
- return err
- }
-
- logger.Info("getting node consensus state...")
- if err := dumpConsensusState(rpc, tmpDir, "consensus_state.json"); err != nil {
- return err
- }
-
- logger.Info("copying node WAL...")
- if err := copyWAL(conf, tmpDir); err != nil {
- if !os.IsNotExist(err) {
- return err
- }
-
- logger.Info("node WAL does not exist; continuing...")
- }
-
- logger.Info("copying node configuration...")
- if err := copyConfig(home, tmpDir); err != nil {
- return err
- }
-
- logger.Info("killing Tendermint process")
- if err := killProc(pid, tmpDir); err != nil {
- return err
+ Args: cobra.ExactArgs(2),
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctx := cmd.Context()
+ pid, err := strconv.ParseInt(args[0], 10, 64)
+ if err != nil {
+ return err
+ }
+
+ outFile := args[1]
+ if outFile == "" {
+ return errors.New("invalid output file")
+ }
+ nodeRPCAddr, err := cmd.Flags().GetString(flagNodeRPCAddr)
+ if err != nil {
+ return fmt.Errorf("flag %q not defined: %w", flagNodeRPCAddr, err)
+ }
+
+ rpc, err := rpchttp.New(nodeRPCAddr)
+ if err != nil {
+ return fmt.Errorf("failed to create new http client: %w", err)
+ }
+
+ home := viper.GetString(cli.HomeFlag)
+ conf := config.DefaultConfig()
+ conf = conf.SetRoot(home)
+ config.EnsureRoot(conf.RootDir)
+
+ // Create a temporary directory which will contain all the state dumps and
+ // relevant files and directories that will be compressed into a file.
+ tmpDir, err := os.MkdirTemp(os.TempDir(), "tendermint_debug_tmp")
+ if err != nil {
+ return fmt.Errorf("failed to create temporary directory: %w", err)
+ }
+ defer os.RemoveAll(tmpDir)
+
+ logger.Info("getting node status...")
+ if err := dumpStatus(ctx, rpc, tmpDir, "status.json"); err != nil {
+ return err
+ }
+
+ logger.Info("getting node network info...")
+ if err := dumpNetInfo(ctx, rpc, tmpDir, "net_info.json"); err != nil {
+ return err
+ }
+
+ logger.Info("getting node consensus state...")
+ if err := dumpConsensusState(ctx, rpc, tmpDir, "consensus_state.json"); err != nil {
+ return err
+ }
+
+ logger.Info("copying node WAL...")
+ if err := copyWAL(conf, tmpDir); err != nil {
+ if !os.IsNotExist(err) {
+ return err
+ }
+
+ logger.Info("node WAL does not exist; continuing...")
+ }
+
+ logger.Info("copying node configuration...")
+ if err := copyConfig(home, tmpDir); err != nil {
+ return err
+ }
+
+ logger.Info("killing Tendermint process")
+ if err := killProc(int(pid), tmpDir); err != nil {
+ return err
+ }
+
+ logger.Info("archiving and compressing debug directory...")
+ return zipDir(tmpDir, outFile)
+ },
}
- logger.Info("archiving and compressing debug directory...")
- return zipDir(tmpDir, outFile)
+ return cmd
}
// killProc attempts to kill the Tendermint process with a given PID with an
@@ -105,7 +112,7 @@ func killCmdHandler(cmd *cobra.Command, args []string) error {
// is tailed and piped to a file under the directory dir. An error is returned
// if the output file cannot be created or the tail command cannot be started.
// An error is not returned if any subsequent syscall fails.
-func killProc(pid uint64, dir string) error {
+func killProc(pid int, dir string) error {
// pipe STDERR output from tailing the Tendermint process to a file
//
// NOTE: This will only work on UNIX systems.
@@ -128,7 +135,7 @@ func killProc(pid uint64, dir string) error {
go func() {
// Killing the Tendermint process with the '-ABRT|-6' signal will result in
// a goroutine stacktrace.
- p, err := os.FindProcess(int(pid))
+ p, err := os.FindProcess(pid)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to find PID to kill Tendermint process: %s", err)
} else if err = p.Signal(syscall.SIGABRT); err != nil {
diff --git a/cmd/tenderdash/commands/debug/util.go b/cmd/tenderdash/commands/debug/util.go
index fa356c4880..24626207f5 100644
--- a/cmd/tenderdash/commands/debug/util.go
+++ b/cmd/tenderdash/commands/debug/util.go
@@ -3,7 +3,7 @@ package debug
import (
"context"
"fmt"
- "io/ioutil"
+ "io"
"net/http"
"os"
"path"
@@ -15,8 +15,8 @@ import (
// dumpStatus gets node status state dump from the Tendermint RPC and writes it
// to file. It returns an error upon failure.
-func dumpStatus(rpc *rpchttp.HTTP, dir, filename string) error {
- status, err := rpc.Status(context.Background())
+func dumpStatus(ctx context.Context, rpc *rpchttp.HTTP, dir, filename string) error {
+ status, err := rpc.Status(ctx)
if err != nil {
return fmt.Errorf("failed to get node status: %w", err)
}
@@ -26,8 +26,8 @@ func dumpStatus(rpc *rpchttp.HTTP, dir, filename string) error {
// dumpNetInfo gets network information state dump from the Tendermint RPC and
// writes it to file. It returns an error upon failure.
-func dumpNetInfo(rpc *rpchttp.HTTP, dir, filename string) error {
- netInfo, err := rpc.NetInfo(context.Background())
+func dumpNetInfo(ctx context.Context, rpc *rpchttp.HTTP, dir, filename string) error {
+ netInfo, err := rpc.NetInfo(ctx)
if err != nil {
return fmt.Errorf("failed to get node network information: %w", err)
}
@@ -37,8 +37,8 @@ func dumpNetInfo(rpc *rpchttp.HTTP, dir, filename string) error {
// dumpConsensusState gets consensus state dump from the Tendermint RPC and
// writes it to file. It returns an error upon failure.
-func dumpConsensusState(rpc *rpchttp.HTTP, dir, filename string) error {
- consDump, err := rpc.DumpConsensusState(context.Background())
+func dumpConsensusState(ctx context.Context, rpc *rpchttp.HTTP, dir, filename string) error {
+ consDump, err := rpc.DumpConsensusState(ctx)
if err != nil {
return fmt.Errorf("failed to get node consensus dump: %w", err)
}
@@ -73,10 +73,10 @@ func dumpProfile(dir, addr, profile string, debug int) error {
}
defer resp.Body.Close()
- body, err := ioutil.ReadAll(resp.Body)
+ body, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read %s profile response body: %w", profile, err)
}
- return ioutil.WriteFile(path.Join(dir, fmt.Sprintf("%s.out", profile)), body, os.ModePerm)
+ return os.WriteFile(path.Join(dir, fmt.Sprintf("%s.out", profile)), body, os.ModePerm)
}
diff --git a/cmd/tenderdash/commands/gen_node_key.go b/cmd/tenderdash/commands/gen_node_key.go
index 81ea2ae70a..2a0bb758eb 100644
--- a/cmd/tenderdash/commands/gen_node_key.go
+++ b/cmd/tenderdash/commands/gen_node_key.go
@@ -1,11 +1,11 @@
package commands
import (
+ "encoding/json"
"fmt"
"github.com/spf13/cobra"
- tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/types"
)
@@ -20,7 +20,7 @@ var GenNodeKeyCmd = &cobra.Command{
func genNodeKey(cmd *cobra.Command, args []string) error {
nodeKey := types.GenNodeKey()
- bz, err := tmjson.Marshal(nodeKey)
+ bz, err := json.Marshal(nodeKey)
if err != nil {
return fmt.Errorf("nodeKey -> json: %w", err)
}
diff --git a/cmd/tenderdash/commands/gen_validator.go b/cmd/tenderdash/commands/gen_validator.go
index 0ab74af5b7..bbe09e9127 100644
--- a/cmd/tenderdash/commands/gen_validator.go
+++ b/cmd/tenderdash/commands/gen_validator.go
@@ -1,42 +1,33 @@
package commands
import (
+ "encoding/json"
"fmt"
"github.com/spf13/cobra"
- tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/privval"
- "github.com/tendermint/tendermint/types"
)
-var (
- keyType string
-)
-
-// GenValidatorCmd allows the generation of a keypair for a
+// MakeGenValidatorCommand allows the generation of a keypair for a
// validator.
-var GenValidatorCmd = &cobra.Command{
- Use: "gen-validator",
- Short: "Generate new validator keypair",
- RunE: genValidator,
-}
-
-func init() {
- GenValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
- "Key type to generate privval file with. Options: ed25519, secp256k1")
-}
-
-func genValidator(cmd *cobra.Command, args []string) error {
- pv := privval.GenFilePV("", "")
-
- jsbz, err := tmjson.Marshal(pv)
- if err != nil {
- return fmt.Errorf("validator -> json: %w", err)
+func MakeGenValidatorCommand() *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "gen-validator",
+ Short: "Generate new validator keypair",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ pv := privval.GenFilePV("", "")
+
+ jsbz, err := json.Marshal(pv)
+ if err != nil {
+ return fmt.Errorf("validator -> json: %w", err)
+ }
+
+ fmt.Printf("%v\n", string(jsbz))
+
+ return nil
+ },
}
- fmt.Printf(`%v
-`, string(jsbz))
-
- return nil
+ return cmd
}
diff --git a/cmd/tenderdash/commands/init.go b/cmd/tenderdash/commands/init.go
index 1786dd70d0..7634bdf496 100644
--- a/cmd/tenderdash/commands/init.go
+++ b/cmd/tenderdash/commands/init.go
@@ -8,57 +8,62 @@ import (
"github.com/dashevo/dashd-go/btcjson"
"github.com/spf13/cobra"
- cfg "github.com/tendermint/tendermint/config"
+
+ "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
-// InitFilesCmd initializes a fresh Tendermint Core instance.
-var InitFilesCmd = &cobra.Command{
- Use: "init [full|validator|seed|single]",
- Short: "Initializes a Tenderdash node",
- ValidArgs: []string{"full", "validator", "seed", "single"},
- // We allow for zero args so we can throw a more informative error
- Args: cobra.MaximumNArgs(1),
- RunE: initFiles,
-}
-
-var (
+type nodeConfig struct {
+ *config.Config
quorumType int
coreChainLockedHeight uint32
initChainInitialHeight int64
appHash []byte
proTxHash []byte
-)
-
-func AddInitFlags(cmd *cobra.Command) {
- cmd.Flags().IntVar(&quorumType, "quorumType", 0, "Quorum Type")
- cmd.Flags().Uint32Var(&coreChainLockedHeight, "coreChainLockedHeight", 1, "Initial Core Chain Locked Height")
- cmd.Flags().Int64Var(&initChainInitialHeight, "initialHeight", 0, "Initial Height")
- cmd.Flags().BytesHexVar(&proTxHash, "proTxHash", []byte(nil), "Node pro tx hash")
- cmd.Flags().BytesHexVar(&appHash, "appHash", []byte(nil), "App hash")
}
-func initFiles(cmd *cobra.Command, args []string) error {
- if len(args) == 0 {
- return errors.New("must specify a node type: tendermint init [validator|full|seed|single]")
+// MakeInitFilesCommand returns the command to initialize a fresh Tendermint Core instance.
+func MakeInitFilesCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ nodeConf := nodeConfig{Config: conf}
+
+ cmd := &cobra.Command{
+ Use: "init [full|validator|seed]",
+ Short: "Initializes a Tenderdash node",
+ ValidArgs: []string{"full", "validator", "seed"},
+ // We allow for zero args so we can throw a more informative error
+ Args: cobra.MaximumNArgs(1),
+ RunE: func(cmd *cobra.Command, args []string) error {
+ if len(args) == 0 {
+ return errors.New("must specify a node type: tendermint init [validator|full|seed]")
+ }
+ nodeConf.Mode = args[0]
+ return initFilesWithConfig(cmd.Context(), nodeConf, logger)
+ },
}
- config.Mode = args[0]
- return initFilesWithConfig(config)
+
+ cmd.Flags().IntVar(&nodeConf.quorumType, "quorumType", 0, "Quorum Type")
+ cmd.Flags().Uint32Var(&nodeConf.coreChainLockedHeight, "coreChainLockedHeight", 1, "Initial Core Chain Locked Height")
+ cmd.Flags().Int64Var(&nodeConf.initChainInitialHeight, "initialHeight", 0, "Initial Height")
+ cmd.Flags().BytesHexVar(&nodeConf.proTxHash, "proTxHash", []byte(nil), "Node pro tx hash")
+ cmd.Flags().BytesHexVar(&nodeConf.appHash, "appHash", []byte(nil), "App hash")
+
+ return cmd
}
-func initFilesWithConfig(config *cfg.Config) error {
+func initFilesWithConfig(ctx context.Context, conf nodeConfig, logger log.Logger) error {
var (
pv *privval.FilePV
err error
)
- if config.Mode == cfg.ModeValidator {
+ if conf.Mode == config.ModeValidator {
// private validator
- privValKeyFile := config.PrivValidator.KeyFile()
- privValStateFile := config.PrivValidator.StateFile()
+ privValKeyFile := conf.PrivValidator.KeyFile()
+ privValStateFile := conf.PrivValidator.StateFile()
if tmos.FileExists(privValKeyFile) {
pv, err = privval.LoadFilePV(privValKeyFile, privValStateFile)
if err != nil {
@@ -72,13 +77,15 @@ func initFilesWithConfig(config *cfg.Config) error {
if err != nil {
return err
}
- pv.Save()
+ if err := pv.Save(); err != nil {
+ return err
+ }
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
}
- nodeKeyFile := config.NodeKeyFile()
+ nodeKeyFile := conf.NodeKeyFile()
if tmos.FileExists(nodeKeyFile) {
logger.Info("Found node key", "path", nodeKeyFile)
} else {
@@ -89,7 +96,7 @@ func initFilesWithConfig(config *cfg.Config) error {
}
// genesis file
- genFile := config.GenesisFile()
+ genFile := conf.GenesisFile()
if tmos.FileExists(genFile) {
logger.Info("Found genesis file", "path", genFile)
} else {
@@ -98,13 +105,13 @@ func initFilesWithConfig(config *cfg.Config) error {
ChainID: fmt.Sprintf("test-chain-%v", tmrand.Str(6)),
GenesisTime: time.Now(),
ConsensusParams: types.DefaultConsensusParams(),
- QuorumType: btcjson.LLMQType(quorumType),
- InitialCoreChainLockedHeight: coreChainLockedHeight,
- InitialHeight: initChainInitialHeight,
- AppHash: appHash,
+ QuorumType: btcjson.LLMQType(conf.quorumType),
+ InitialCoreChainLockedHeight: conf.coreChainLockedHeight,
+ InitialHeight: conf.initChainInitialHeight,
+ AppHash: conf.appHash,
}
- ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
+ ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
defer cancel()
// if this is a validator we add it to genesis
@@ -139,10 +146,10 @@ func initFilesWithConfig(config *cfg.Config) error {
}
// write config file
- if err := cfg.WriteConfigFile(config.RootDir, config); err != nil {
+ if err := config.WriteConfigFile(conf.RootDir, conf.Config); err != nil {
return err
}
- logger.Info("Generated config", "mode", config.Mode)
+ logger.Info("Generated config", "mode", conf.Mode)
return nil
}
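The hunk above replaces package-level flag globals with a `nodeConfig` struct owned by the command constructor, so each invocation gets its own flag state. A minimal sketch of the same binding pattern using only the standard `flag` package (the names here are illustrative, not the actual Tenderdash types):

```go
package main

import (
	"flag"
	"fmt"
)

// nodeConfig mirrors the patch's approach: flag values live in a struct
// owned by the constructor instead of in package-level globals.
type nodeConfig struct {
	quorumType             int
	coreChainLockedHeight  uint
	initChainInitialHeight int64
}

// makeInitFlagSet is a hypothetical stand-in for MakeInitFilesCommand:
// it returns a FlagSet whose flags write into the supplied struct.
func makeInitFlagSet(conf *nodeConfig) *flag.FlagSet {
	fs := flag.NewFlagSet("init", flag.ContinueOnError)
	fs.IntVar(&conf.quorumType, "quorumType", 0, "Quorum Type")
	fs.UintVar(&conf.coreChainLockedHeight, "coreChainLockedHeight", 1, "Initial Core Chain Locked Height")
	fs.Int64Var(&conf.initChainInitialHeight, "initialHeight", 0, "Initial Height")
	return fs
}

func main() {
	var conf nodeConfig
	fs := makeInitFlagSet(&conf)
	if err := fs.Parse([]string{"--quorumType", "100", "--initialHeight", "5"}); err != nil {
		panic(err)
	}
	fmt.Println(conf.quorumType, conf.coreChainLockedHeight, conf.initChainInitialHeight) // prints: 100 1 5
}
```

Because the struct is local to the constructor, two commands built from the same function no longer share mutable state, which is what made the old global-variable version hard to test.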
diff --git a/cmd/tenderdash/commands/inspect.go b/cmd/tenderdash/commands/inspect.go
index 4f5ce2eccf..9c12ef5cf6 100644
--- a/cmd/tenderdash/commands/inspect.go
+++ b/cmd/tenderdash/commands/inspect.go
@@ -1,21 +1,22 @@
package commands
import (
- "context"
- "os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
+ "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/inspect"
+ "github.com/tendermint/tendermint/libs/log"
)
-// InspectCmd is the command for starting an inspect server.
-var InspectCmd = &cobra.Command{
- Use: "inspect",
- Short: "Run an inspect server for investigating Tendermint state",
- Long: `
+// MakeInspectCommand constructs the command to start an inspect server.
+func MakeInspectCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "inspect",
+ Short: "Run an inspect server for investigating Tendermint state",
+ Long: `
inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
issues with Tendermint.
@@ -24,40 +25,27 @@ var InspectCmd = &cobra.Command{
The inspect command can be used to query the block and state store using Tendermint
RPC calls to debug issues of inconsistent state.
`,
-
- RunE: runInspect,
-}
-
-func init() {
- InspectCmd.Flags().
- String("rpc.laddr",
- config.RPC.ListenAddress, "RPC listenener address. Port required")
- InspectCmd.Flags().
- String("db-backend",
- config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
- InspectCmd.Flags().
- String("db-dir", config.DBPath, "database directory")
-}
-
-func runInspect(cmd *cobra.Command, args []string) error {
- ctx, cancel := context.WithCancel(cmd.Context())
- defer cancel()
-
- c := make(chan os.Signal, 1)
- signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
- go func() {
- <-c
- cancel()
- }()
-
- ins, err := inspect.NewFromConfig(logger, config)
- if err != nil {
- return err
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
+ defer cancel()
+
+ ins, err := inspect.NewFromConfig(logger, conf)
+ if err != nil {
+ return err
+ }
+
+ logger.Info("starting inspect server")
+ if err := ins.Run(ctx); err != nil {
+ return err
+ }
+ return nil
+ },
}
+ cmd.Flags().String("rpc.laddr",
+ conf.RPC.ListenAddress, "RPC listener address. Port required")
+ cmd.Flags().String("db-backend",
+ conf.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
+ cmd.Flags().String("db-dir", conf.DBPath, "database directory")
- logger.Info("starting inspect server")
- if err := ins.Run(ctx); err != nil {
- return err
- }
- return nil
+ return cmd
}
diff --git a/cmd/tenderdash/commands/key_migrate.go b/cmd/tenderdash/commands/key_migrate.go
index 739af4a7d1..5866be341b 100644
--- a/cmd/tenderdash/commands/key_migrate.go
+++ b/cmd/tenderdash/commands/key_migrate.go
@@ -5,11 +5,14 @@ import (
"fmt"
"github.com/spf13/cobra"
+
cfg "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/scripts/keymigrate"
+ "github.com/tendermint/tendermint/scripts/scmigrate"
)
-func MakeKeyMigrateCommand() *cobra.Command {
+func MakeKeyMigrateCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "key-migrate",
Short: "Run Database key migration",
@@ -38,7 +41,7 @@ func MakeKeyMigrateCommand() *cobra.Command {
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
- Config: config,
+ Config: conf,
})
if err != nil {
@@ -49,6 +52,13 @@ func MakeKeyMigrateCommand() *cobra.Command {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
+
+ if dbctx == "blockstore" {
+ if err := scmigrate.Migrate(ctx, db); err != nil {
+ return fmt.Errorf("running seen commit migration: %w", err)
+ }
+ }
}
logger.Info("completed database migration successfully")
@@ -58,7 +68,7 @@ func MakeKeyMigrateCommand() *cobra.Command {
}
// allow database info to be overridden via cli
- addDBFlags(cmd)
+ addDBFlags(cmd, conf)
return cmd
}
diff --git a/cmd/tenderdash/commands/light.go b/cmd/tenderdash/commands/light.go
index 6b01e9c412..5b37c6bd32 100644
--- a/cmd/tenderdash/commands/light.go
+++ b/cmd/tenderdash/commands/light.go
@@ -1,22 +1,22 @@
package commands
import (
- "context"
"errors"
"fmt"
"net/http"
"os"
+ "os/signal"
"path/filepath"
"strings"
+ "syscall"
"time"
- dashcore "github.com/tendermint/tendermint/dashcore/rpc"
-
"github.com/spf13/cobra"
dbm "github.com/tendermint/tm-db"
+ "github.com/tendermint/tendermint/config"
+ dashcore "github.com/tendermint/tendermint/dash/core"
"github.com/tendermint/tendermint/libs/log"
- tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/light"
lproxy "github.com/tendermint/tendermint/light/proxy"
lrpc "github.com/tendermint/tendermint/light/rpc"
@@ -24,11 +24,56 @@ import (
rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
)
-// LightCmd represents the base command when called without any subcommands
-var LightCmd = &cobra.Command{
- Use: "light [chainID]",
- Short: "Run a light client proxy server, verifying Tendermint rpc",
- Long: `Run a light client proxy server, verifying Tendermint rpc.
+// MakeLightCommand constructs the base command called when invoked without any subcommands.
+func MakeLightCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ var (
+ listenAddr string
+ primaryAddr string
+ witnessAddrsJoined string
+ chainID string
+ dir string
+ maxOpenConnections int
+
+ logLevel string
+ logFormat string
+
+ primaryKey = []byte("primary")
+ witnessesKey = []byte("witnesses")
+
+ dashCoreRPCHost string
+ dashCoreRPCUser string
+ dashCoreRPCPass string
+ )
+
+ checkForExistingProviders := func(db dbm.DB) (string, []string, error) {
+ primaryBytes, err := db.Get(primaryKey)
+ if err != nil {
+ return "", []string{""}, err
+ }
+ witnessesBytes, err := db.Get(witnessesKey)
+ if err != nil {
+ return "", []string{""}, err
+ }
+ witnessesAddrs := strings.Split(string(witnessesBytes), ",")
+ return string(primaryBytes), witnessesAddrs, nil
+ }
+
+ saveProviders := func(db dbm.DB, primaryAddr, witnessesAddrs string) error {
+ err := db.Set(primaryKey, []byte(primaryAddr))
+ if err != nil {
+ return fmt.Errorf("failed to save primary provider: %w", err)
+ }
+ err = db.Set(witnessesKey, []byte(witnessesAddrs))
+ if err != nil {
+ return fmt.Errorf("failed to save witness providers: %w", err)
+ }
+ return nil
+ }
+
+ cmd := &cobra.Command{
+ Use: "light [chainID]",
+ Short: "Run a light client proxy server, verifying Tendermint rpc",
+ Long: `Run a light client proxy server, verifying Tendermint rpc.
All calls that can be tracked back to a block header by a proof
will be verified before passing them back to the caller. Other than
@@ -38,6 +83,8 @@ Furthermore to the chainID, a fresh instance of a light client will
need a primary RPC address and witness RPC addresses. To restart the node, thereafter
only the chainID is required.
When /abci_query is called, the Merkle key path format is:
/{store name}/{key}
@@ -45,167 +92,136 @@ When /abci_query is called, the Merkle key path format is:
Please verify with your application that this Merkle key format is used (true
for applications built w/ Cosmos SDK).
`,
- RunE: runProxy,
- Args: cobra.ExactArgs(1),
- Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
+ RunE: func(cmd *cobra.Command, args []string) error {
+ chainID = args[0]
+ logger.Info("Creating client...", "chainID", chainID)
+
+ var witnessesAddrs []string
+ if witnessAddrsJoined != "" {
+ witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
+ }
+
+ lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
+ if err != nil {
+ return fmt.Errorf("can't create a db: %w", err)
+ }
+ // create a prefixed db on the chainID
+ db := dbm.NewPrefixDB(lightDB, []byte(chainID))
+
+ if primaryAddr == "" { // check to see if we can start from an existing state
+ var err error
+ primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
+ if err != nil {
+ return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
+ }
+ if primaryAddr == "" {
+ return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
+ " Run the command: tendermint light --help for more information")
+ }
+ } else {
+ err := saveProviders(db, primaryAddr, witnessAddrsJoined)
+ if err != nil {
+ logger.Error("Unable to save primary and or witness addresses", "err", err)
+ }
+ }
+
+ options := []light.Option{
+ light.Logger(logger),
+ light.DashCoreVerification(),
+ }
+
+ rpcLogger := logger.With("module", dashcore.ModuleName)
+ dashCoreRPCClient, err := dashcore.NewRPCClient(dashCoreRPCHost, dashCoreRPCUser, dashCoreRPCPass, rpcLogger)
+ if err != nil {
+ return fmt.Errorf("failed to create Dash Core RPC client: %w", err)
+ }
+
+ c, err := light.NewHTTPClient(
+ cmd.Context(),
+ chainID,
+ primaryAddr,
+ witnessesAddrs,
+ dbs.New(db),
+ dashCoreRPCClient,
+ options...,
+ )
+ if err != nil {
+ return err
+ }
+
+ cfg := rpcserver.DefaultConfig()
+ cfg.MaxBodyBytes = conf.RPC.MaxBodyBytes
+ cfg.MaxHeaderBytes = conf.RPC.MaxHeaderBytes
+ cfg.MaxOpenConnections = maxOpenConnections
+ // If necessary adjust global WriteTimeout to ensure it's greater than
+ // TimeoutBroadcastTxCommit.
+ // See https://github.com/tendermint/tendermint/issues/3435
+ if cfg.WriteTimeout <= conf.RPC.TimeoutBroadcastTxCommit {
+ cfg.WriteTimeout = conf.RPC.TimeoutBroadcastTxCommit + 1*time.Second
+ }
+
+ p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
+ if err != nil {
+ return err
+ }
+
+ ctx, cancel := signal.NotifyContext(cmd.Context(), os.Interrupt, syscall.SIGTERM)
+ defer cancel()
+
+ go func() {
+ <-ctx.Done()
+ p.Listener.Close()
+ }()
+
+ logger.Info("Starting proxy...", "laddr", listenAddr)
+ if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
+ // Error starting or closing listener:
+ logger.Error("proxy ListenAndServe", "err", err)
+ }
+
+ return nil
+ },
+ Args: cobra.ExactArgs(1),
+ Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
--height 962118 --hash 28B97BE9F6DE51AC69F70E0B7BFD7E5C9CD1A595B7DC31AFF27C50D4948020CD`,
-}
-
-var (
- listenAddr string
- primaryAddr string
- witnessAddrsJoined string
- chainID string
- dir string
- maxOpenConnections int
-
- logLevel string
- logFormat string
-
- primaryKey = []byte("primary")
- witnessesKey = []byte("witnesses")
-
- dashCoreRPCHost string
- dashCoreRPCUser string
- dashCoreRPCPass string
-)
+ }
-func init() {
- LightCmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
+ cmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
"serve the proxy on the given address")
- LightCmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
+ cmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
"connect to a Tendermint node at this address")
- LightCmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
+ cmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
"tendermint nodes to cross-check the primary node, comma-separated")
- LightCmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
+ cmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
"specify the directory")
- LightCmd.Flags().IntVar(
+ cmd.Flags().IntVar(
&maxOpenConnections,
"max-open-connections",
900,
"maximum number of simultaneous connections (including WebSocket).")
- LightCmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
- LightCmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
- LightCmd.Flags().StringVar(&dashCoreRPCHost, "dchost", "",
+ cmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
+ cmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
+ cmd.Flags().StringVar(&dashCoreRPCHost, "dchost", "",
"host address of the Dash Core RPC node")
- LightCmd.Flags().StringVar(&dashCoreRPCHost, "dcuser", "",
+ cmd.Flags().StringVar(&dashCoreRPCUser, "dcuser", "",
"Dash Core RPC node user")
- LightCmd.Flags().StringVar(&dashCoreRPCHost, "dcpass", "",
+ cmd.Flags().StringVar(&dashCoreRPCPass, "dcpass", "",
"Dash Core RPC node password")
-}
-
-func runProxy(cmd *cobra.Command, args []string) error {
- logger, err := log.NewDefaultLogger(logFormat, logLevel, false)
- if err != nil {
- return err
- }
-
- chainID = args[0]
- logger.Info("Creating client...", "chainID", chainID)
-
- witnessesAddrs := []string{}
- if witnessAddrsJoined != "" {
- witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
- }
- lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
- if err != nil {
- return fmt.Errorf("can't create a db: %w", err)
- }
- // create a prefixed db on the chainID
- db := dbm.NewPrefixDB(lightDB, []byte(chainID))
-
- if primaryAddr == "" { // check to see if we can start from an existing state
- var err error
- primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
- if err != nil {
- return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
- }
- if primaryAddr == "" {
- return errors.New(
- "no primary address was provided nor found. Please provide a primary (using -p)." +
- " Run the command: tendermint light --help for more information",
- )
- }
- } else {
- err := saveProviders(db, primaryAddr, witnessAddrsJoined)
- if err != nil {
- logger.Error("Unable to save primary and or witness addresses", "err", err)
- }
- }
-
- options := []light.Option{
- light.Logger(logger),
- light.DashCoreVerification(),
- }
-
- rpcLogger := logger.With("module", dashcore.ModuleName)
- dashCoreRPCClient, _ := dashcore.NewRPCClient(dashCoreRPCHost, dashCoreRPCUser, dashCoreRPCPass, rpcLogger)
-
- c, err := light.NewHTTPClient(
- context.Background(),
- chainID,
- primaryAddr,
- witnessesAddrs,
- dbs.New(db),
- dashCoreRPCClient,
- options...,
- )
- if err != nil {
- return err
- }
-
- cfg := rpcserver.DefaultConfig()
- cfg.MaxBodyBytes = config.RPC.MaxBodyBytes
- cfg.MaxHeaderBytes = config.RPC.MaxHeaderBytes
- cfg.MaxOpenConnections = maxOpenConnections
- // If necessary adjust global WriteTimeout to ensure it's greater than
- // TimeoutBroadcastTxCommit.
- // See https://github.com/tendermint/tendermint/issues/3435
- if cfg.WriteTimeout <= config.RPC.TimeoutBroadcastTxCommit {
- cfg.WriteTimeout = config.RPC.TimeoutBroadcastTxCommit + 1*time.Second
- }
-
- p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
- if err != nil {
- return err
- }
-
- // Stop upon receiving SIGTERM or CTRL-C.
- tmos.TrapSignal(logger, func() {
- p.Listener.Close()
- })
-
- logger.Info("Starting proxy...", "laddr", listenAddr)
- if err := p.ListenAndServe(); err != http.ErrServerClosed {
- // Error starting or closing listener:
- logger.Error("proxy ListenAndServe", "err", err)
- }
-
- return nil
-}
-
-func checkForExistingProviders(db dbm.DB) (string, []string, error) {
- primaryBytes, err := db.Get(primaryKey)
- if err != nil {
- return "", []string{""}, err
- }
- witnessesBytes, err := db.Get(witnessesKey)
- if err != nil {
- return "", []string{""}, err
- }
- witnessesAddrs := strings.Split(string(witnessesBytes), ",")
- return string(primaryBytes), witnessesAddrs, nil
-}
-
-func saveProviders(db dbm.DB, primaryAddr, witnessesAddrs string) error {
- err := db.Set(primaryKey, []byte(primaryAddr))
- if err != nil {
- return fmt.Errorf("failed to save primary provider: %w", err)
- }
- err = db.Set(witnessesKey, []byte(witnessesAddrs))
- if err != nil {
- return fmt.Errorf("failed to save witness providers: %w", err)
- }
- return nil
+ return cmd
}
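The light command persists its primary and witness addresses under fixed keys (`"primary"`, `"witnesses"`) so a restart only needs the chainID. A self-contained sketch of those `saveProviders`/`checkForExistingProviders` closures, using an in-memory map as a stand-in for the chainID-prefixed GoLevelDB the real code uses:

```go
package main

import (
	"fmt"
	"strings"
)

// kvStore is a minimal in-memory stand-in for the dbm.DB used by the
// light command; the real code stores these keys in a prefixed LevelDB.
type kvStore map[string][]byte

func (s kvStore) Get(k []byte) ([]byte, error) { return s[string(k)], nil }
func (s kvStore) Set(k, v []byte) error        { s[string(k)] = v; return nil }

var (
	primaryKey   = []byte("primary")
	witnessesKey = []byte("witnesses")
)

// saveProviders mirrors the closure in MakeLightCommand: the primary
// address and the comma-joined witness list are stored under fixed keys.
func saveProviders(db kvStore, primaryAddr, witnessAddrsJoined string) error {
	if err := db.Set(primaryKey, []byte(primaryAddr)); err != nil {
		return fmt.Errorf("failed to save primary provider: %w", err)
	}
	if err := db.Set(witnessesKey, []byte(witnessAddrsJoined)); err != nil {
		return fmt.Errorf("failed to save witness providers: %w", err)
	}
	return nil
}

// checkForExistingProviders is the inverse: it splits the stored witness
// list back into a slice so a restart only needs the chainID.
func checkForExistingProviders(db kvStore) (string, []string, error) {
	primary, _ := db.Get(primaryKey)
	witnesses, _ := db.Get(witnessesKey)
	return string(primary), strings.Split(string(witnesses), ","), nil
}

func main() {
	db := kvStore{}
	_ = saveProviders(db, "http://primary:26657", "http://w1:26657,http://w2:26657")
	primary, witnesses, _ := checkForExistingProviders(db)
	fmt.Println(primary, len(witnesses)) // prints: http://primary:26657 2
}
```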
diff --git a/cmd/tenderdash/commands/probe_upnp.go b/cmd/tenderdash/commands/probe_upnp.go
deleted file mode 100644
index 4c71e099a4..0000000000
--- a/cmd/tenderdash/commands/probe_upnp.go
+++ /dev/null
@@ -1,32 +0,0 @@
-package commands
-
-import (
- "fmt"
-
- "github.com/spf13/cobra"
-
- "github.com/tendermint/tendermint/internal/p2p/upnp"
- tmjson "github.com/tendermint/tendermint/libs/json"
-)
-
-// ProbeUpnpCmd adds capabilities to test the UPnP functionality.
-var ProbeUpnpCmd = &cobra.Command{
- Use: "probe-upnp",
- Short: "Test UPnP functionality",
- RunE: probeUpnp,
-}
-
-func probeUpnp(cmd *cobra.Command, args []string) error {
- capabilities, err := upnp.Probe(logger)
- if err != nil {
- fmt.Println("Probe failed: ", err)
- } else {
- fmt.Println("Probe success!")
- jsonBytes, err := tmjson.Marshal(capabilities)
- if err != nil {
- return err
- }
- fmt.Println(string(jsonBytes))
- }
- return nil
-}
diff --git a/cmd/tenderdash/commands/reindex_event.go b/cmd/tenderdash/commands/reindex_event.go
index bd95779635..6cec32738a 100644
--- a/cmd/tenderdash/commands/reindex_event.go
+++ b/cmd/tenderdash/commands/reindex_event.go
@@ -17,6 +17,7 @@ import (
"github.com/tendermint/tendermint/internal/state/indexer/sink/kv"
"github.com/tendermint/tendermint/internal/state/indexer/sink/psql"
"github.com/tendermint/tendermint/internal/store"
+ "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/rpc/coretypes"
"github.com/tendermint/tendermint/types"
@@ -26,59 +27,68 @@ const (
reindexFailed = "event re-index failed: "
)
-// ReIndexEventCmd allows re-index the event by given block height interval
-var ReIndexEventCmd = &cobra.Command{
- Use: "reindex-event",
- Short: "reindex events to the event store backends",
- Long: `
+// MakeReindexEventCommand constructs a command to re-index events in a block height interval.
+func MakeReindexEventCommand(conf *tmcfg.Config, logger log.Logger) *cobra.Command {
+ var (
+ startHeight int64
+ endHeight int64
+ )
+
+ cmd := &cobra.Command{
+ Use: "reindex-event",
+ Short: "reindex events to the event store backends",
+ Long: `
reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
-you can run this command when the event store backend dropped/disconnected or you want to
-replace the backend. The default start-height is 0, meaning the tooling will start
-reindex from the base block height(inclusive); and the default end-height is 0, meaning
+you can run this command when the event store backend dropped/disconnected or you want to
+replace the backend. The default start-height is 0, meaning the tooling will start
+reindex from the base block height(inclusive); and the default end-height is 0, meaning
the tooling will reindex until the latest block height(inclusive). User can omit
either or both arguments.
`,
- Example: `
+ Example: `
tendermint reindex-event
tendermint reindex-event --start-height 2
tendermint reindex-event --end-height 10
tendermint reindex-event --start-height 2 --end-height 10
`,
- Run: func(cmd *cobra.Command, args []string) {
- bs, ss, err := loadStateAndBlockStore(config)
- if err != nil {
- fmt.Println(reindexFailed, err)
- return
- }
-
- if err := checkValidHeight(bs); err != nil {
- fmt.Println(reindexFailed, err)
- return
- }
+ RunE: func(cmd *cobra.Command, args []string) error {
+ bs, ss, err := loadStateAndBlockStore(conf)
+ if err != nil {
+ return fmt.Errorf("%s: %w", reindexFailed, err)
+ }
- es, err := loadEventSinks(config)
- if err != nil {
- fmt.Println(reindexFailed, err)
- return
- }
+ cvhArgs := checkValidHeightArgs{
+ startHeight: startHeight,
+ endHeight: endHeight,
+ }
+ if err := checkValidHeight(bs, cvhArgs); err != nil {
+ return fmt.Errorf("%s: %w", reindexFailed, err)
+ }
- if err = eventReIndex(cmd, es, bs, ss); err != nil {
- fmt.Println(reindexFailed, err)
- return
- }
+ es, err := loadEventSinks(conf)
+ if err != nil {
+ return fmt.Errorf("%s: %w", reindexFailed, err)
+ }
- fmt.Println("event re-index finished")
- },
-}
+ riArgs := eventReIndexArgs{
+ startHeight: startHeight,
+ endHeight: endHeight,
+ sinks: es,
+ blockStore: bs,
+ stateStore: ss,
+ }
+ if err := eventReIndex(cmd, riArgs); err != nil {
+ return fmt.Errorf("%s: %w", reindexFailed, err)
+ }
-var (
- startHeight int64
- endHeight int64
-)
+ logger.Info("event re-index finished")
+ return nil
+ },
+ }
-func init() {
- ReIndexEventCmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
- ReIndexEventCmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
+ cmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height to start the re-index from (inclusive)")
+ cmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height to end the re-index at (inclusive)")
+ return cmd
}
func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
@@ -109,7 +119,7 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
if conn == "" {
return nil, errors.New("the psql connection settings cannot be empty")
}
- es, err := psql.NewEventSink(conn, chainID)
+ es, err := psql.NewEventSink(conn, cfg.ChainID())
if err != nil {
return nil, err
}
@@ -159,52 +169,58 @@ func loadStateAndBlockStore(cfg *tmcfg.Config) (*store.BlockStore, state.Store,
return blockStore, stateStore, nil
}
-func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStore, ss state.Store) error {
+type eventReIndexArgs struct {
+ startHeight int64
+ endHeight int64
+ sinks []indexer.EventSink
+ blockStore state.BlockStore
+ stateStore state.Store
+}
+func eventReIndex(cmd *cobra.Command, args eventReIndexArgs) error {
var bar progressbar.Bar
- bar.NewOption(startHeight-1, endHeight)
+ bar.NewOption(args.startHeight-1, args.endHeight)
fmt.Println("start re-indexing events:")
defer bar.Finish()
- for i := startHeight; i <= endHeight; i++ {
+ for i := args.startHeight; i <= args.endHeight; i++ {
select {
case <-cmd.Context().Done():
return fmt.Errorf("event re-index terminated at height %d: %w", i, cmd.Context().Err())
default:
- b := bs.LoadBlock(i)
+ b := args.blockStore.LoadBlock(i)
if b == nil {
return fmt.Errorf("not able to load block at height %d from the blockstore", i)
}
- r, err := ss.LoadABCIResponses(i)
+ r, err := args.stateStore.LoadABCIResponses(i)
if err != nil {
return fmt.Errorf("not able to load ABCI Response at height %d from the statestore", i)
}
e := types.EventDataNewBlockHeader{
- Header: b.Header,
- NumTxs: int64(len(b.Txs)),
- ResultBeginBlock: *r.BeginBlock,
- ResultEndBlock: *r.EndBlock,
+ Header: b.Header,
+ NumTxs: int64(len(b.Txs)),
+ ResultFinalizeBlock: *r.FinalizeBlock,
}
var batch *indexer.Batch
if e.NumTxs > 0 {
batch = indexer.NewBatch(e.NumTxs)
- for i, tx := range b.Data.Txs {
+ for i := range b.Data.Txs {
tr := abcitypes.TxResult{
Height: b.Height,
Index: uint32(i),
- Tx: tx,
- Result: *(r.DeliverTxs[i]),
+ Tx: b.Data.Txs[i],
+ Result: *(r.FinalizeBlock.TxResults[i]),
}
_ = batch.Add(&tr)
}
}
- for _, sink := range es {
+ for _, sink := range args.sinks {
if err := sink.IndexBlockEvents(e); err != nil {
return fmt.Errorf("block event re-index at height %d failed: %w", i, err)
}
@@ -223,40 +239,45 @@ func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStor
return nil
}
-func checkValidHeight(bs state.BlockStore) error {
+type checkValidHeightArgs struct {
+ startHeight int64
+ endHeight int64
+}
+
+func checkValidHeight(bs state.BlockStore, args checkValidHeightArgs) error {
base := bs.Base()
- if startHeight == 0 {
- startHeight = base
+ if args.startHeight == 0 {
+ args.startHeight = base
fmt.Printf("set the start block height to the base height of the blockstore %d \n", base)
}
- if startHeight < base {
+ if args.startHeight < base {
return fmt.Errorf("%s (requested start height: %d, base height: %d)",
- coretypes.ErrHeightNotAvailable, startHeight, base)
+ coretypes.ErrHeightNotAvailable, args.startHeight, base)
}
height := bs.Height()
- if startHeight > height {
+ if args.startHeight > height {
return fmt.Errorf(
- "%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, startHeight, height)
+ "%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, args.startHeight, height)
}
- if endHeight == 0 || endHeight > height {
- endHeight = height
+ if args.endHeight == 0 || args.endHeight > height {
+ args.endHeight = height
fmt.Printf("set the end block height to the latest height of the blockstore %d \n", height)
}
- if endHeight < base {
+ if args.endHeight < base {
return fmt.Errorf(
- "%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, endHeight, base)
+ "%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, args.endHeight, base)
}
- if endHeight < startHeight {
+ if args.endHeight < args.startHeight {
return fmt.Errorf(
"%s (requested the end height: %d is less than the start height: %d)",
- coretypes.ErrInvalidRequest, startHeight, endHeight)
+ coretypes.ErrInvalidRequest, args.startHeight, args.endHeight)
}
return nil
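The height validation above defaults a zero start height to the blockstore base and a zero (or too-large) end height to the latest stored height, then rejects out-of-range or inverted intervals. A sketch of the same logic as a pure function (the name `clampHeights` is illustrative, not the actual helper):

```go
package main

import (
	"errors"
	"fmt"
)

// clampHeights reproduces checkValidHeight's behavior: zero start
// defaults to the blockstore base, zero (or too-large) end defaults to
// the latest height, and invalid requests return an error.
func clampHeights(start, end, base, latest int64) (int64, int64, error) {
	if start == 0 {
		start = base
	}
	if start < base || start > latest {
		return 0, 0, errors.New("requested start height not available")
	}
	if end == 0 || end > latest {
		end = latest
	}
	if end < base || end < start {
		return 0, 0, errors.New("requested end height invalid")
	}
	return start, end, nil
}

func main() {
	// Omitting both flags re-indexes the whole stored range [base, latest].
	s, e, err := clampHeights(0, 0, 2, 10)
	fmt.Println(s, e, err) // prints: 2 10 <nil>
}
```

Passing the bounds as explicit arguments, as the patch does with `checkValidHeightArgs`, is what lets the tests drop the shared `startHeight`/`endHeight` globals.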
diff --git a/cmd/tenderdash/commands/reindex_event_test.go b/cmd/tenderdash/commands/reindex_event_test.go
index 2008251bc1..f60fe1b04e 100644
--- a/cmd/tenderdash/commands/reindex_event_test.go
+++ b/cmd/tenderdash/commands/reindex_event_test.go
@@ -8,14 +8,15 @@ import (
"github.com/spf13/cobra"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
+ dbm "github.com/tendermint/tm-db"
abcitypes "github.com/tendermint/tendermint/abci/types"
- tmcfg "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state/indexer"
"github.com/tendermint/tendermint/internal/state/mocks"
+ "github.com/tendermint/tendermint/libs/log"
prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
"github.com/tendermint/tendermint/types"
- dbm "github.com/tendermint/tm-db"
_ "github.com/lib/pq" // for the psql sink
)
@@ -25,13 +26,15 @@ const (
base int64 = 2
)
-func setupReIndexEventCmd() *cobra.Command {
+func setupReIndexEventCmd(ctx context.Context, conf *config.Config, logger log.Logger) *cobra.Command {
+ cmd := MakeReindexEventCommand(conf, logger)
+
reIndexEventCmd := &cobra.Command{
- Use: ReIndexEventCmd.Use,
+ Use: cmd.Use,
Run: func(cmd *cobra.Command, args []string) {},
}
- _ = reIndexEventCmd.ExecuteContext(context.Background())
+ _ = reIndexEventCmd.ExecuteContext(ctx)
return reIndexEventCmd
}
@@ -68,10 +71,7 @@ func TestReIndexEventCheckHeight(t *testing.T) {
}
for _, tc := range testCases {
- startHeight = tc.startHeight
- endHeight = tc.endHeight
-
- err := checkValidHeight(mockBlockStore)
+ err := checkValidHeight(mockBlockStore, checkValidHeightArgs{startHeight: tc.startHeight, endHeight: tc.endHeight})
if tc.validHeight {
require.NoError(t, err)
} else {
@@ -97,7 +97,7 @@ func TestLoadEventSink(t *testing.T) {
}
for _, tc := range testCases {
- cfg := tmcfg.TestConfig()
+ cfg := config.TestConfig()
cfg.TxIndex.Indexer = tc.sinks
cfg.TxIndex.PsqlConn = tc.connURL
_, err := loadEventSinks(cfg)
@@ -110,7 +110,7 @@ func TestLoadEventSink(t *testing.T) {
}
func TestLoadBlockStore(t *testing.T) {
- testCfg, err := tmcfg.ResetTestRoot(t.Name())
+ testCfg, err := config.ResetTestRoot(t.TempDir(), t.Name())
require.NoError(t, err)
testCfg.DBBackend = "goleveldb"
_, _, err = loadStateAndBlockStore(testCfg)
@@ -152,11 +152,11 @@ func TestReIndexEvent(t *testing.T) {
On("IndexTxEvents", mock.AnythingOfType("[]*types.TxResult")).Return(errors.New("")).Once().
On("IndexTxEvents", mock.AnythingOfType("[]*types.TxResult")).Return(nil)
- dtx := abcitypes.ResponseDeliverTx{}
+ dtx := abcitypes.ExecTxResult{}
abciResp := &prototmstate.ABCIResponses{
- DeliverTxs: []*abcitypes.ResponseDeliverTx{&dtx},
- EndBlock: &abcitypes.ResponseEndBlock{},
- BeginBlock: &abcitypes.ResponseBeginBlock{},
+ FinalizeBlock: &abcitypes.ResponseFinalizeBlock{
+ TxResults: []*abcitypes.ExecTxResult{&dtx},
+ },
}
mockStateStore.
@@ -177,11 +177,22 @@ func TestReIndexEvent(t *testing.T) {
{height, height, false},
}
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ logger := log.NewNopLogger()
+ conf := config.DefaultConfig()
+
for _, tc := range testCases {
- startHeight = tc.startHeight
- endHeight = tc.endHeight
+ err := eventReIndex(
+ setupReIndexEventCmd(ctx, conf, logger),
+ eventReIndexArgs{
+ sinks: []indexer.EventSink{mockEventSink},
+ blockStore: mockBlockStore,
+ stateStore: mockStateStore,
+ startHeight: tc.startHeight,
+ endHeight: tc.endHeight,
+ })
- err := eventReIndex(setupReIndexEventCmd(), []indexer.EventSink{mockEventSink}, mockBlockStore, mockStateStore)
if tc.reIndexErr {
require.Error(t, err)
} else {
diff --git a/cmd/tenderdash/commands/replay.go b/cmd/tenderdash/commands/replay.go
index 023921d9e4..fb6f19e55d 100644
--- a/cmd/tenderdash/commands/replay.go
+++ b/cmd/tenderdash/commands/replay.go
@@ -2,25 +2,30 @@ package commands
import (
"github.com/spf13/cobra"
+
+ "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/consensus"
+ "github.com/tendermint/tendermint/libs/log"
)
-// ReplayCmd allows replaying of messages from the WAL.
-var ReplayCmd = &cobra.Command{
- Use: "replay",
- Short: "Replay messages from WAL",
- Run: func(cmd *cobra.Command, args []string) {
- consensus.RunReplayFile(config.BaseConfig, config.Consensus, false)
- },
+// MakeReplayCommand constructs a command to replay messages from the WAL into consensus.
+func MakeReplayCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ return &cobra.Command{
+ Use: "replay",
+ Short: "Replay messages from WAL",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, false)
+ },
+ }
}
-// ReplayConsoleCmd allows replaying of messages from the WAL in a
-// console.
-var ReplayConsoleCmd = &cobra.Command{
- Use: "replay-console",
- Short: "Replay messages from WAL in a console",
- Run: func(cmd *cobra.Command, args []string) {
- consensus.RunReplayFile(config.BaseConfig, config.Consensus, true)
- },
- PreRun: deprecateSnakeCase,
+// MakeReplayConsoleCommand constructs a command to replay WAL messages to stdout.
+func MakeReplayConsoleCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ return &cobra.Command{
+ Use: "replay-console",
+ Short: "Replay messages from WAL in a console",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, true)
+ },
+ }
}
diff --git a/cmd/tenderdash/commands/reset.go b/cmd/tenderdash/commands/reset.go
new file mode 100644
index 0000000000..38beffb629
--- /dev/null
+++ b/cmd/tenderdash/commands/reset.go
@@ -0,0 +1,179 @@
+package commands
+
+import (
+ "os"
+ "path/filepath"
+
+ "github.com/spf13/cobra"
+
+ "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/libs/log"
+ tmos "github.com/tendermint/tendermint/libs/os"
+ "github.com/tendermint/tendermint/privval"
+ "github.com/tendermint/tendermint/types"
+)
+
+// MakeResetCommand constructs a command that removes the database of
+// the specified Tendermint core instance.
+func MakeResetCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ var keyType string
+
+ resetCmd := &cobra.Command{
+ Use: "reset",
+ Short: "Set of commands to conveniently reset tendermint-related data",
+ }
+
+ resetBlocksCmd := &cobra.Command{
+ Use: "blockchain",
+ Short: "Removes all blocks, state, transactions and evidence stored by the tendermint node",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ return ResetState(conf.DBDir(), logger)
+ },
+ }
+
+ resetPeersCmd := &cobra.Command{
+ Use: "peers",
+ Short: "Removes all peer addresses",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ return ResetPeerStore(conf.DBDir())
+ },
+ }
+
+ resetSignerCmd := &cobra.Command{
+ Use: "unsafe-signer",
+ Short: "Resets private validator signer state",
+ Long: `Resets private validator signer state.
+Only use in testing. This can cause the node to double sign`,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ return ResetFilePV(conf.PrivValidator.KeyFile(), conf.PrivValidator.StateFile(), logger, keyType)
+ },
+ }
+
+ resetAllCmd := &cobra.Command{
+ Use: "unsafe-all",
+ Short: "Removes all tendermint data including signing state",
+ Long: `Removes all tendermint data including signing state.
+Only use in testing. This can cause the node to double sign`,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ return ResetAll(conf.DBDir(), conf.PrivValidator.KeyFile(),
+ conf.PrivValidator.StateFile(), logger, keyType)
+ },
+ }
+
+ resetSignerCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+ "Signer key type. Options: ed25519, secp256k1")
+
+ resetAllCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+ "Signer key type. Options: ed25519, secp256k1")
+
+ resetCmd.AddCommand(resetBlocksCmd)
+ resetCmd.AddCommand(resetPeersCmd)
+ resetCmd.AddCommand(resetSignerCmd)
+ resetCmd.AddCommand(resetAllCmd)
+
+ return resetCmd
+}
+
+// ResetAll removes address book files plus all data, and resets the privValidator data.
+// Exported for external CLI usage.
+// XXX: this is unsafe and only suitable for testnets.
+func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
+ if err := os.RemoveAll(dbDir); err == nil {
+ logger.Info("Removed all blockchain history", "dir", dbDir)
+ } else {
+ logger.Error("error removing all blockchain history", "dir", dbDir, "err", err)
+ }
+
+ if err := tmos.EnsureDir(dbDir, 0700); err != nil {
+ logger.Error("unable to recreate dbDir", "err", err)
+ }
+
+ // recreate the dbDir since the privVal state needs to live there
+ return ResetFilePV(privValKeyFile, privValStateFile, logger, keyType)
+}
+
+// ResetState removes all blocks, tendermint state, indexed transactions and evidence.
+func ResetState(dbDir string, logger log.Logger) error {
+ blockdb := filepath.Join(dbDir, "blockstore.db")
+ state := filepath.Join(dbDir, "state.db")
+ wal := filepath.Join(dbDir, "cs.wal")
+ evidence := filepath.Join(dbDir, "evidence.db")
+ txIndex := filepath.Join(dbDir, "tx_index.db")
+
+ if tmos.FileExists(blockdb) {
+ if err := os.RemoveAll(blockdb); err == nil {
+ logger.Info("Removed all blockstore.db", "dir", blockdb)
+ } else {
+ logger.Error("error removing all blockstore.db", "dir", blockdb, "err", err)
+ }
+ }
+
+ if tmos.FileExists(state) {
+ if err := os.RemoveAll(state); err == nil {
+ logger.Info("Removed all state.db", "dir", state)
+ } else {
+ logger.Error("error removing all state.db", "dir", state, "err", err)
+ }
+ }
+
+ if tmos.FileExists(wal) {
+ if err := os.RemoveAll(wal); err == nil {
+ logger.Info("Removed all cs.wal", "dir", wal)
+ } else {
+ logger.Error("error removing all cs.wal", "dir", wal, "err", err)
+ }
+ }
+
+ if tmos.FileExists(evidence) {
+ if err := os.RemoveAll(evidence); err == nil {
+ logger.Info("Removed all evidence.db", "dir", evidence)
+ } else {
+ logger.Error("error removing all evidence.db", "dir", evidence, "err", err)
+ }
+ }
+
+ if tmos.FileExists(txIndex) {
+ if err := os.RemoveAll(txIndex); err == nil {
+ logger.Info("Removed tx_index.db", "dir", txIndex)
+ } else {
+ logger.Error("error removing tx_index.db", "dir", txIndex, "err", err)
+ }
+ }
+
+ return tmos.EnsureDir(dbDir, 0700)
+}
+
+// ResetFilePV loads the file private validator and resets the watermark to 0. If used on an existing network,
+// this can cause the node to double sign.
+// XXX: this is unsafe and only suitable for testnets.
+func ResetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
+ if _, err := os.Stat(privValKeyFile); err == nil {
+ pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
+ if err != nil {
+ return err
+ }
+ if err := pv.Reset(); err != nil {
+ return err
+ }
+ logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
+ "stateFile", privValStateFile)
+ } else {
+ pv := privval.GenFilePV(privValKeyFile, privValStateFile)
+ if err := pv.Save(); err != nil {
+ return err
+ }
+ logger.Info("Generated private validator file", "keyFile", privValKeyFile,
+ "stateFile", privValStateFile)
+ }
+ return nil
+}
+
+// ResetPeerStore removes the peer store containing all information used by the tendermint networking layer.
+// In the case of a reset, new peers will need to be set either via the config or through the discovery mechanism.
+func ResetPeerStore(dbDir string) error {
+ peerstore := filepath.Join(dbDir, "peerstore.db")
+ if tmos.FileExists(peerstore) {
+ return os.RemoveAll(peerstore)
+ }
+ return nil
+}
diff --git a/cmd/tenderdash/commands/reset_priv_validator.go b/cmd/tenderdash/commands/reset_priv_validator.go
deleted file mode 100644
index 06e18a19d2..0000000000
--- a/cmd/tenderdash/commands/reset_priv_validator.go
+++ /dev/null
@@ -1,94 +0,0 @@
-package commands
-
-import (
- "os"
-
- "github.com/spf13/cobra"
-
- "github.com/tendermint/tendermint/libs/log"
- tmos "github.com/tendermint/tendermint/libs/os"
- "github.com/tendermint/tendermint/privval"
-)
-
-// ResetAllCmd removes the database of this Tendermint core
-// instance.
-var ResetAllCmd = &cobra.Command{
- Use: "unsafe-reset-all",
- Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
- RunE: resetAll,
-}
-
-var keepAddrBook bool
-
-func init() {
- ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
-}
-
-// ResetPrivValidatorCmd resets the private validator files.
-var ResetPrivValidatorCmd = &cobra.Command{
- Use: "unsafe-reset-priv-validator",
- Short: "(unsafe) Reset this node's validator to genesis state",
- RunE: resetPrivValidator,
-}
-
-// XXX: this is totally unsafe.
-// it's only suitable for testnets.
-func resetAll(cmd *cobra.Command, args []string) error {
- return ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
- config.PrivValidator.StateFile(), logger)
-}
-
-// XXX: this is totally unsafe.
-// it's only suitable for testnets.
-func resetPrivValidator(cmd *cobra.Command, args []string) error {
- return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
-}
-
-// ResetAll removes address book files plus all data, and resets the privValdiator data.
-// Exported so other CLI tools can use it.
-func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
- if keepAddrBook {
- logger.Info("The address book remains intact")
- } else {
- removeAddrBook(addrBookFile, logger)
- }
- if err := os.RemoveAll(dbDir); err == nil {
- logger.Info("Removed all blockchain history", "dir", dbDir)
- } else {
- logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
- }
- // recreate the dbDir since the privVal state needs to live there
- if err := tmos.EnsureDir(dbDir, 0700); err != nil {
- logger.Error("unable to recreate dbDir", "err", err)
- }
- return resetFilePV(privValKeyFile, privValStateFile, logger)
-}
-
-func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
- if _, err := os.Stat(privValKeyFile); err == nil {
- pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
- if err != nil {
- return err
- }
- pv.Reset()
- logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
- "stateFile", privValStateFile)
- } else {
- pv := privval.GenFilePV(privValKeyFile, privValStateFile)
- if err != nil {
- return err
- }
- pv.Save()
- logger.Info("Generated private validator file", "keyFile", privValKeyFile,
- "stateFile", privValStateFile)
- }
- return nil
-}
-
-func removeAddrBook(addrBookFile string, logger log.Logger) {
- if err := os.Remove(addrBookFile); err == nil {
- logger.Info("Removed existing address book", "file", addrBookFile)
- } else if !os.IsNotExist(err) {
- logger.Info("Error removing address book", "file", addrBookFile, "err", err)
- }
-}
diff --git a/cmd/tenderdash/commands/reset_test.go b/cmd/tenderdash/commands/reset_test.go
new file mode 100644
index 0000000000..fd3963e885
--- /dev/null
+++ b/cmd/tenderdash/commands/reset_test.go
@@ -0,0 +1,62 @@
+package commands
+
+import (
+ "context"
+ "path/filepath"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+
+ cfg "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/libs/log"
+ "github.com/tendermint/tendermint/privval"
+ "github.com/tendermint/tendermint/types"
+)
+
+func Test_ResetAll(t *testing.T) {
+ config := cfg.TestConfig()
+ dir := t.TempDir()
+ config.SetRoot(dir)
+ logger := log.NewNopLogger()
+ cfg.EnsureRoot(dir)
+ require.NoError(t, initFilesWithConfig(context.Background(), nodeConfig{Config: config}, logger))
+ pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
+ require.NoError(t, err)
+ pv.LastSignState.Height = 10
+ require.NoError(t, pv.Save())
+ require.NoError(t, ResetAll(config.DBDir(), config.PrivValidator.KeyFile(),
+ config.PrivValidator.StateFile(), logger, types.ABCIPubKeyTypeEd25519))
+ require.DirExists(t, config.DBDir())
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
+ require.FileExists(t, config.PrivValidator.StateFile())
+ pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
+ require.NoError(t, err)
+ require.Equal(t, int64(0), pv.LastSignState.Height)
+}
+
+func Test_ResetState(t *testing.T) {
+ config := cfg.TestConfig()
+ dir := t.TempDir()
+ config.SetRoot(dir)
+ logger := log.NewNopLogger()
+ cfg.EnsureRoot(dir)
+ require.NoError(t, initFilesWithConfig(context.Background(), nodeConfig{Config: config}, logger))
+ pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
+ require.NoError(t, err)
+ pv.LastSignState.Height = 10
+ require.NoError(t, pv.Save())
+ require.NoError(t, ResetState(config.DBDir(), logger))
+ require.DirExists(t, config.DBDir())
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
+ require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
+ require.FileExists(t, config.PrivValidator.StateFile())
+ pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
+ require.NoError(t, err)
+ // private validator state should still be intact.
+ require.Equal(t, int64(10), pv.LastSignState.Height)
+}
diff --git a/cmd/tenderdash/commands/rollback.go b/cmd/tenderdash/commands/rollback.go
index 8391ee506a..a604341783 100644
--- a/cmd/tenderdash/commands/rollback.go
+++ b/cmd/tenderdash/commands/rollback.go
@@ -5,14 +5,15 @@ import (
"github.com/spf13/cobra"
- cfg "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state"
)
-var RollbackStateCmd = &cobra.Command{
- Use: "rollback",
- Short: "rollback tendermint state by one height",
- Long: `
+func MakeRollbackStateCommand(conf *config.Config) *cobra.Command {
+ return &cobra.Command{
+ Use: "rollback",
+ Short: "rollback tendermint state by one height",
+ Long: `
A state rollback is performed to recover from an incorrect application state transition,
when Tendermint has persisted an incorrect app hash and is thus unable to make
progress. Rollback overwrites a state at height n with the state at height n - 1.
@@ -20,21 +21,23 @@ The application should also roll back to height n - 1. No blocks are removed, so
restarting Tendermint the transactions in block n will be re-executed against the
application.
`,
- RunE: func(cmd *cobra.Command, args []string) error {
- height, hash, err := RollbackState(config)
- if err != nil {
- return fmt.Errorf("failed to rollback state: %w", err)
- }
-
- fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
- return nil
- },
+ RunE: func(cmd *cobra.Command, args []string) error {
+ height, hash, err := RollbackState(conf)
+ if err != nil {
+ return fmt.Errorf("failed to rollback state: %w", err)
+ }
+
+ fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
+ return nil
+ },
+ }
+
}
// RollbackState takes the state at the current height n and overwrites it with the state
// at height n - 1. Note state here refers to tendermint state not application state.
// Returns the latest state height and app hash alongside an error if there was one.
-func RollbackState(config *cfg.Config) (int64, []byte, error) {
+func RollbackState(config *config.Config) (int64, []byte, error) {
// use the parsed config to load the block and state store
blockStore, stateStore, err := loadStateAndBlockStore(config)
if err != nil {
diff --git a/cmd/tenderdash/commands/root.go b/cmd/tenderdash/commands/root.go
index 02f260de57..fdee638bcb 100644
--- a/cmd/tenderdash/commands/root.go
+++ b/cmd/tenderdash/commands/root.go
@@ -2,73 +2,68 @@ package commands
import (
"fmt"
- "strings"
+ "os"
+ "path/filepath"
"time"
"github.com/spf13/cobra"
"github.com/spf13/viper"
- cfg "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/config"
+ "github.com/tendermint/tendermint/libs/cli"
"github.com/tendermint/tendermint/libs/log"
)
-var (
- config = cfg.DefaultConfig()
- logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
- ctxTimeout = 4 * time.Second
-)
-
-func init() {
- registerFlagsRootCmd(RootCmd)
-}
-
-func registerFlagsRootCmd(cmd *cobra.Command) {
- cmd.PersistentFlags().String("log-level", config.LogLevel, "log level")
-}
+const ctxTimeout = 4 * time.Second
// ParseConfig retrieves the default environment configuration,
// sets up the Tendermint root and ensures that the root exists
-func ParseConfig() (*cfg.Config, error) {
- conf := cfg.DefaultConfig()
- err := viper.Unmarshal(conf)
- if err != nil {
+func ParseConfig(conf *config.Config) (*config.Config, error) {
+ if err := viper.Unmarshal(conf); err != nil {
return nil, err
}
+
conf.SetRoot(conf.RootDir)
- cfg.EnsureRoot(conf.RootDir)
+
if err := conf.ValidateBasic(); err != nil {
- return nil, fmt.Errorf("error in config file: %v", err)
+ return nil, fmt.Errorf("error in config file: %w", err)
}
return conf, nil
}
-// RootCmd is the root command for Tendermint core.
-var RootCmd = &cobra.Command{
- Use: "tendermint",
- Short: "BFT state machine replication for applications in any programming languages",
- PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
- if cmd.Name() == VersionCmd.Name() {
- return nil
- }
+// RootCommand constructs the root command-line entry point for Tendermint core.
+func RootCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "tendermint",
+ Short: "BFT state machine replication for applications in any programming languages",
+ PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
+ if cmd.Name() == VersionCmd.Name() {
+ return nil
+ }
- config, err = ParseConfig()
- if err != nil {
- return err
- }
+ if err := cli.BindFlagsLoadViper(cmd, args); err != nil {
+ return err
+ }
- logger, err = log.NewDefaultLogger(config.LogFormat, config.LogLevel, false)
- if err != nil {
- return err
- }
+ pconf, err := ParseConfig(conf)
+ if err != nil {
+ return err
+ }
+ *conf = *pconf
+ config.EnsureRoot(conf.RootDir)
+ if err := log.OverrideWithNewLogger(logger, conf.LogFormat, conf.LogLevel); err != nil {
+ return err
+ }
+ if warning := pconf.DeprecatedFieldWarning(); warning != nil {
+ logger.Info("WARNING", "deprecated field warning", warning)
+ }
- logger = logger.With("module", "main")
- return nil
- },
-}
-
-// deprecateSnakeCase is a util function for 0.34.1. Should be removed in 0.35
-func deprecateSnakeCase(cmd *cobra.Command, args []string) {
- if strings.Contains(cmd.CalledAs(), "_") {
- fmt.Println("Deprecated: snake_case commands will be replaced by hyphen-case commands in the next major release")
+ return nil
+ },
}
+ cmd.PersistentFlags().StringP(cli.HomeFlag, "", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)), "directory for config and data")
+ cmd.PersistentFlags().Bool(cli.TraceFlag, false, "print out full stack trace on errors")
+ cmd.PersistentFlags().String("log-level", conf.LogLevel, "log level")
+ cobra.OnInitialize(func() { cli.InitEnv("TM") })
+ return cmd
}
diff --git a/cmd/tenderdash/commands/root_test.go b/cmd/tenderdash/commands/root_test.go
index cd4bc9f5f7..a4f4fb08d5 100644
--- a/cmd/tenderdash/commands/root_test.go
+++ b/cmd/tenderdash/commands/root_test.go
@@ -1,11 +1,10 @@
package commands
import (
+ "context"
"fmt"
- "io/ioutil"
"os"
"path/filepath"
- "strconv"
"testing"
"github.com/spf13/cobra"
@@ -15,47 +14,54 @@ import (
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
+ "github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
)
-// clearConfig clears env vars, the given root dir, and resets viper.
-func clearConfig(dir string) {
- if err := os.Unsetenv("TMHOME"); err != nil {
- panic(err)
- }
- if err := os.Unsetenv("TM_HOME"); err != nil {
- panic(err)
+// writeConfigVals writes a toml file with the given values.
+// It returns an error if writing was impossible.
+func writeConfigVals(dir string, vals map[string]string) error {
+ data := ""
+ for k, v := range vals {
+ data += fmt.Sprintf("%s = \"%s\"\n", k, v)
}
+ cfile := filepath.Join(dir, "config.toml")
+ return os.WriteFile(cfile, []byte(data), 0600)
+}
+
+// clearConfig clears env vars, the given root dir, and resets viper.
+func clearConfig(t *testing.T, dir string) *cfg.Config {
+ t.Helper()
+ require.NoError(t, os.Unsetenv("TMHOME"))
+ require.NoError(t, os.Unsetenv("TM_HOME"))
+ require.NoError(t, os.RemoveAll(dir))
- if err := os.RemoveAll(dir); err != nil {
- panic(err)
- }
viper.Reset()
- config = cfg.DefaultConfig()
+ conf := cfg.DefaultConfig()
+ conf.RootDir = dir
+ return conf
}
// prepare new rootCmd
-func testRootCmd() *cobra.Command {
- rootCmd := &cobra.Command{
- Use: RootCmd.Use,
- PersistentPreRunE: RootCmd.PersistentPreRunE,
- Run: func(cmd *cobra.Command, args []string) {},
- }
- registerFlagsRootCmd(rootCmd)
+func testRootCmd(conf *cfg.Config) *cobra.Command {
+ logger := log.NewNopLogger()
+ cmd := RootCommand(conf, logger)
+ cmd.RunE = func(cmd *cobra.Command, args []string) error { return nil }
+
var l string
- rootCmd.PersistentFlags().String("log", l, "Log")
- return rootCmd
+ cmd.PersistentFlags().String("log", l, "Log")
+ return cmd
}
-func testSetup(rootDir string, args []string, env map[string]string) error {
- clearConfig(rootDir)
+func testSetup(ctx context.Context, t *testing.T, conf *cfg.Config, args []string, env map[string]string) error {
+ t.Helper()
- rootCmd := testRootCmd()
- cmd := cli.PrepareBaseCmd(rootCmd, "TM", rootDir)
+ cmd := testRootCmd(conf)
+ viper.Set(cli.HomeFlag, conf.RootDir)
// run with the args and env
- args = append([]string{rootCmd.Use}, args...)
- return cli.RunWithArgs(cmd, args, env)
+ args = append([]string{cmd.Use}, args...)
+ return cli.RunWithArgs(ctx, cmd, args, env)
}
func TestRootHome(t *testing.T) {
@@ -71,23 +77,29 @@ func TestRootHome(t *testing.T) {
{nil, map[string]string{"TMHOME": newRoot}, newRoot},
}
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
for i, tc := range cases {
- idxString := strconv.Itoa(i)
+ t.Run(fmt.Sprint(i), func(t *testing.T) {
+ conf := clearConfig(t, tc.root)
- err := testSetup(defaultRoot, tc.args, tc.env)
- require.Nil(t, err, idxString)
+ err := testSetup(ctx, t, conf, tc.args, tc.env)
+ require.NoError(t, err)
- assert.Equal(t, tc.root, config.RootDir, idxString)
- assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
- assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
- assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
+ require.Equal(t, tc.root, conf.RootDir)
+ require.Equal(t, tc.root, conf.P2P.RootDir)
+ require.Equal(t, tc.root, conf.Consensus.RootDir)
+ require.Equal(t, tc.root, conf.Mempool.RootDir)
+ })
}
}
func TestRootFlagsEnv(t *testing.T) {
-
// defaults
defaults := cfg.DefaultConfig()
+ defaultDir := t.TempDir()
+
defaultLogLvl := defaults.LogLevel
cases := []struct {
@@ -102,18 +114,25 @@ func TestRootFlagsEnv(t *testing.T) {
{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env
}
- defaultRoot := t.TempDir()
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
for i, tc := range cases {
- idxString := strconv.Itoa(i)
+ t.Run(fmt.Sprint(i), func(t *testing.T) {
+ conf := clearConfig(t, defaultDir)
- err := testSetup(defaultRoot, tc.args, tc.env)
- require.Nil(t, err, idxString)
+ err := testSetup(ctx, t, conf, tc.args, tc.env)
+ require.NoError(t, err)
+
+ assert.Equal(t, tc.logLevel, conf.LogLevel)
+ })
- assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
}
}
func TestRootConfig(t *testing.T) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
// write non-default config
nonDefaultLogLvl := "debug"
@@ -122,9 +141,8 @@ func TestRootConfig(t *testing.T) {
}
cases := []struct {
- args []string
- env map[string]string
-
+ args []string
+ env map[string]string
logLvl string
}{
{nil, nil, nonDefaultLogLvl}, // should load config
@@ -133,29 +151,31 @@ func TestRootConfig(t *testing.T) {
}
for i, tc := range cases {
- defaultRoot := t.TempDir()
- idxString := strconv.Itoa(i)
- clearConfig(defaultRoot)
-
- // XXX: path must match cfg.defaultConfigPath
- configFilePath := filepath.Join(defaultRoot, "config")
- err := tmos.EnsureDir(configFilePath, 0700)
- require.Nil(t, err)
-
- // write the non-defaults to a different path
- // TODO: support writing sub configs so we can test that too
- err = WriteConfigVals(configFilePath, cvals)
- require.Nil(t, err)
-
- rootCmd := testRootCmd()
- cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
-
- // run with the args and env
- tc.args = append([]string{rootCmd.Use}, tc.args...)
- err = cli.RunWithArgs(cmd, tc.args, tc.env)
- require.Nil(t, err, idxString)
-
- assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
+ t.Run(fmt.Sprint(i), func(t *testing.T) {
+ defaultRoot := t.TempDir()
+ conf := clearConfig(t, defaultRoot)
+ conf.LogLevel = tc.logLvl
+
+ // XXX: path must match cfg.defaultConfigPath
+ configFilePath := filepath.Join(defaultRoot, "config")
+ err := tmos.EnsureDir(configFilePath, 0700)
+ require.NoError(t, err)
+
+ // write the non-defaults to a different path
+ // TODO: support writing sub configs so we can test that too
+ err = writeConfigVals(configFilePath, cvals)
+ require.NoError(t, err)
+
+ cmd := testRootCmd(conf)
+ viper.Set(cli.HomeFlag, conf.RootDir)
+
+ // run with the args and env
+ tc.args = append([]string{cmd.Use}, tc.args...)
+ err = cli.RunWithArgs(ctx, cmd, tc.args, tc.env)
+ require.NoError(t, err)
+
+ require.Equal(t, tc.logLvl, conf.LogLevel)
+ })
}
}
@@ -167,5 +187,5 @@ func WriteConfigVals(dir string, vals map[string]string) error {
data += fmt.Sprintf("%s = \"%s\"\n", k, v)
}
cfile := filepath.Join(dir, "config.toml")
- return ioutil.WriteFile(cfile, []byte(data), 0600)
+ return os.WriteFile(cfile, []byte(data), 0600)
}
diff --git a/cmd/tenderdash/commands/run_node.go b/cmd/tenderdash/commands/run_node.go
index 435ce9ea4e..347a04034e 100644
--- a/cmd/tenderdash/commands/run_node.go
+++ b/cmd/tenderdash/commands/run_node.go
@@ -3,155 +3,128 @@ package commands
import (
"bytes"
"crypto/sha256"
- "errors"
- "flag"
"fmt"
"io"
"os"
+ "os/signal"
+ "syscall"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
- tmos "github.com/tendermint/tendermint/libs/os"
+ "github.com/tendermint/tendermint/libs/log"
)
var (
genesisHash []byte
)
-// AddNodeFlags exposes some common configuration options on the command-line
-// These are exposed for convenience of commands embedding a tendermint node
-func AddNodeFlags(cmd *cobra.Command) {
+// AddNodeFlags exposes some common configuration options from conf in the flag
+// set for cmd. This is a convenience for commands embedding a Tendermint node.
+func AddNodeFlags(cmd *cobra.Command, conf *cfg.Config) {
// bind flags
- cmd.Flags().String("moniker", config.Moniker, "node name")
+ cmd.Flags().String("moniker", conf.Moniker, "node name")
// mode flags
- cmd.Flags().String("mode", config.Mode, "node mode (full | validator | seed)")
+ cmd.Flags().String("mode", conf.Mode, "node mode (full | validator | seed)")
// priv val flags
cmd.Flags().String(
"priv-validator-laddr",
- config.PrivValidator.ListenAddr,
+ conf.PrivValidator.ListenAddr,
"socket address to listen on for connections from external priv-validator process")
// node flags
- cmd.Flags().Bool("blocksync.enable", config.BlockSync.Enable, "enable fast blockchain syncing")
-
- // TODO (https://github.com/tendermint/tendermint/issues/6908): remove this check after the v0.35 release cycle
- // This check was added to give users an upgrade prompt to use the new flag for syncing.
- //
- // The pflag package does not have a native way to print a depcrecation warning
- // and return an error. This logic was added to print a deprecation message to the user
- // and then crash if the user attempts to use the old --fast-sync flag.
- fs := flag.NewFlagSet("", flag.ExitOnError)
- fs.Func("fast-sync", "deprecated",
- func(string) error {
- return errors.New("--fast-sync has been deprecated, please use --blocksync.enable")
- })
- cmd.Flags().AddGoFlagSet(fs)
-
- cmd.Flags().MarkHidden("fast-sync") //nolint:errcheck
+
cmd.Flags().BytesHexVar(
&genesisHash,
"genesis-hash",
[]byte{},
"optional SHA-256 hash of the genesis file")
- cmd.Flags().Int64("consensus.double-sign-check-height", config.Consensus.DoubleSignCheckHeight,
+ cmd.Flags().Int64("consensus.double-sign-check-height", conf.Consensus.DoubleSignCheckHeight,
"how many blocks to look back to check existence of the node's "+
"consensus votes before joining consensus")
// abci flags
cmd.Flags().String(
"proxy-app",
- config.ProxyApp,
+ conf.ProxyApp,
"proxy app address, or one of: 'kvstore',"+
" 'persistent_kvstore', 'e2e' or 'noop' for local testing.")
- cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
+ cmd.Flags().String("abci", conf.ABCI, "specify abci transport (socket | grpc)")
// rpc flags
- cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required")
- cmd.Flags().String(
- "rpc.grpc-laddr",
- config.RPC.GRPCListenAddress,
- "GRPC listen address (BroadcastTx only). Port required")
- cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "enabled unsafe rpc methods")
- cmd.Flags().String("rpc.pprof-laddr", config.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
+ cmd.Flags().String("rpc.laddr", conf.RPC.ListenAddress, "RPC listen address. Port required")
+ cmd.Flags().Bool("rpc.unsafe", conf.RPC.Unsafe, "enabled unsafe rpc methods")
+ cmd.Flags().String("rpc.pprof-laddr", conf.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
// p2p flags
cmd.Flags().String(
"p2p.laddr",
- config.P2P.ListenAddress,
+ conf.P2P.ListenAddress,
"node listen address. (0.0.0.0:0 means any interface, any port)")
- cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
- cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
- cmd.Flags().String("p2p.unconditional-peer-ids",
- config.P2P.UnconditionalPeerIDs, "comma-delimited IDs of unconditional peers")
- cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")
- cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "enable/disable Peer-Exchange")
- cmd.Flags().String("p2p.private-peer-ids", config.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
+ cmd.Flags().String("p2p.seeds", conf.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
+ cmd.Flags().String("p2p.persistent-peers", conf.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
+ cmd.Flags().Bool("p2p.upnp", conf.P2P.UPNP, "enable/disable UPNP port forwarding")
+ cmd.Flags().Bool("p2p.pex", conf.P2P.PexReactor, "enable/disable Peer-Exchange")
+ cmd.Flags().String("p2p.private-peer-ids", conf.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
// consensus flags
cmd.Flags().Bool(
"consensus.create-empty-blocks",
- config.Consensus.CreateEmptyBlocks,
+ conf.Consensus.CreateEmptyBlocks,
"set this to false to only produce blocks when there are txs or when the AppHash changes")
cmd.Flags().String(
"consensus.create-empty-blocks-interval",
- config.Consensus.CreateEmptyBlocksInterval.String(),
+ conf.Consensus.CreateEmptyBlocksInterval.String(),
"the possible interval between empty blocks")
- addDBFlags(cmd)
+ addDBFlags(cmd, conf)
}
-func addDBFlags(cmd *cobra.Command) {
+func addDBFlags(cmd *cobra.Command, conf *cfg.Config) {
cmd.Flags().String(
"db-backend",
- config.DBBackend,
+ conf.DBBackend,
"database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
cmd.Flags().String(
"db-dir",
- config.DBPath,
+ conf.DBPath,
"database directory")
}
// NewRunNodeCmd returns the command that allows the CLI to start a node.
// It can be used with a custom PrivValidator and in-process ABCI application.
-func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
+func NewRunNodeCmd(nodeProvider cfg.ServiceProvider, conf *cfg.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "start",
Aliases: []string{"node", "run"},
Short: "Run the tendermint node",
RunE: func(cmd *cobra.Command, args []string) error {
- if err := checkGenesisHash(config); err != nil {
+ if err := checkGenesisHash(conf); err != nil {
return err
}
- n, err := nodeProvider(config, logger)
+ ctx, cancel := signal.NotifyContext(cmd.Context(), os.Interrupt, syscall.SIGTERM)
+ defer cancel()
+
+ n, err := nodeProvider(ctx, conf, logger)
if err != nil {
return fmt.Errorf("failed to create node: %w", err)
}
- if err := n.Start(); err != nil {
+ if err := n.Start(ctx); err != nil {
return fmt.Errorf("failed to start node: %w", err)
}
- logger.Info("started node", "node", n.String())
-
- // Stop upon receiving SIGTERM or CTRL-C.
- tmos.TrapSignal(logger, func() {
- if n.IsRunning() {
- if err := n.Stop(); err != nil {
- logger.Error("unable to stop the node", "error", err)
- }
- }
- })
+ logger.Info("started node", "chain", conf.ChainID())
- // Run forever.
- select {}
+ <-ctx.Done()
+ return nil
},
}
- AddNodeFlags(cmd)
+ AddNodeFlags(cmd, conf)
return cmd
}
diff --git a/cmd/tenderdash/commands/show_node_id.go b/cmd/tenderdash/commands/show_node_id.go
index 488f4c3228..ffc6c4d5e0 100644
--- a/cmd/tenderdash/commands/show_node_id.go
+++ b/cmd/tenderdash/commands/show_node_id.go
@@ -4,21 +4,23 @@ import (
"fmt"
"github.com/spf13/cobra"
+
+ "github.com/tendermint/tendermint/config"
)
-// ShowNodeIDCmd dumps node's ID to the standard output.
-var ShowNodeIDCmd = &cobra.Command{
- Use: "show-node-id",
- Short: "Show this node's ID",
- RunE: showNodeID,
-}
+// MakeShowNodeIDCommand constructs a command to dump the node ID to stdout.
+func MakeShowNodeIDCommand(conf *config.Config) *cobra.Command {
+ return &cobra.Command{
+ Use: "show-node-id",
+ Short: "Show this node's ID",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ nodeKeyID, err := conf.LoadNodeKeyID()
+ if err != nil {
+ return err
+ }
-func showNodeID(cmd *cobra.Command, args []string) error {
- nodeKeyID, err := config.LoadNodeKeyID()
- if err != nil {
- return err
+ fmt.Println(nodeKeyID)
+ return nil
+ },
}
-
- fmt.Println(nodeKeyID)
- return nil
}
diff --git a/cmd/tenderdash/commands/show_validator.go b/cmd/tenderdash/commands/show_validator.go
index 03ddecd9d6..548b2a3c51 100644
--- a/cmd/tenderdash/commands/show_validator.go
+++ b/cmd/tenderdash/commands/show_validator.go
@@ -6,74 +6,78 @@ import (
"github.com/spf13/cobra"
+ "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto"
- tmjson "github.com/tendermint/tendermint/libs/json"
+ "github.com/tendermint/tendermint/internal/jsontypes"
+ "github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
tmgrpc "github.com/tendermint/tendermint/privval/grpc"
)
-// ShowValidatorCmd adds capabilities for showing the validator info.
-var ShowValidatorCmd = &cobra.Command{
- Use: "show-validator",
- Short: "Show this node's validator info",
- RunE: showValidator,
-}
-
-func showValidator(cmd *cobra.Command, args []string) error {
- var (
- pubKey crypto.PubKey
- err error
- )
+// MakeShowValidatorCommand constructs a command to show the validator info.
+func MakeShowValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+ return &cobra.Command{
+ Use: "show-validator",
+ Short: "Show this node's validator info",
+ RunE: func(cmd *cobra.Command, args []string) error {
+ var (
+ pubKey crypto.PubKey
+ err error
+ bctx = cmd.Context()
+ )
+ //TODO: remove once gRPC is the only supported protocol
+ protocol, _ := tmnet.ProtocolAndAddress(conf.PrivValidator.ListenAddr)
+ switch protocol {
+ case "grpc":
+ pvsc, err := tmgrpc.DialRemoteSigner(
+ bctx,
+ conf.PrivValidator,
+ conf.ChainID(),
+ logger,
+ conf.Instrumentation.Prometheus,
+ )
+ if err != nil {
+ return fmt.Errorf("can't connect to remote validator %w", err)
+ }
- //TODO: remove once gRPC is the only supported protocol
- protocol, _ := tmnet.ProtocolAndAddress(config.PrivValidator.ListenAddr)
- switch protocol {
- case "grpc":
- pvsc, err := tmgrpc.DialRemoteSigner(
- config.PrivValidator,
- config.ChainID(),
- logger,
- config.Instrumentation.Prometheus,
- )
- if err != nil {
- return fmt.Errorf("can't connect to remote validator %w", err)
- }
+ ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
+ defer cancel()
- ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
- defer cancel()
+ _, err = pvsc.GetProTxHash(ctx)
+ if err != nil {
+ return fmt.Errorf("can't get proTxHash: %w", err)
+ }
+ default:
- proTxHash, err = pvsc.GetProTxHash(ctx)
- if err != nil {
- return fmt.Errorf("can't get proTxHash: %w", err)
- }
- default:
+ keyFilePath := conf.PrivValidator.KeyFile()
+ if !tmos.FileExists(keyFilePath) {
+ return fmt.Errorf("private validator file %s does not exist", keyFilePath)
+ }
- keyFilePath := config.PrivValidator.KeyFile()
- if !tmos.FileExists(keyFilePath) {
- return fmt.Errorf("private validator file %s does not exist", keyFilePath)
- }
+ pv, err := privval.LoadFilePV(keyFilePath, conf.PrivValidator.StateFile())
+ if err != nil {
+ return err
+ }
- pv, err := privval.LoadFilePV(keyFilePath, config.PrivValidator.StateFile())
- if err != nil {
- return err
- }
+ ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
+ defer cancel()
- ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
- defer cancel()
+ _, err = pv.GetProTxHash(ctx)
+ if err != nil {
+ return fmt.Errorf("can't get proTxHash: %w", err)
+ }
+ }
- proTxHash, err = pv.GetProTxHash(ctx)
- if err != nil {
- return fmt.Errorf("can't get proTxHash: %w", err)
- }
- }
+ bz, err := jsontypes.Marshal(pubKey)
+ if err != nil {
+ return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
+ }
- bz, err := tmjson.Marshal(pubKey)
- if err != nil {
- return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
+ fmt.Println(string(bz))
+ return nil
+ },
}
- fmt.Println(string(bz))
- return nil
}
diff --git a/cmd/tenderdash/commands/testnet.go b/cmd/tenderdash/commands/testnet.go
index af7fcd46f9..50d82b21b0 100644
--- a/cmd/tenderdash/commands/testnet.go
+++ b/cmd/tenderdash/commands/testnet.go
@@ -15,283 +15,318 @@ import (
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/bytes"
+ "github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
-var (
- nValidators int
- nNonValidators int
- initialHeight int64
- configFile string
- outputDir string
- nodeDirPrefix string
-
- populatePersistentPeers bool
- hostnamePrefix string
- hostnameSuffix string
- startingIPAddress string
- hostnames []string
- p2pPort int
- randomMonikers bool
-)
-
const (
nodeDirPerm = 0755
)
-func init() {
- TestnetFilesCmd.Flags().IntVar(&nValidators, "v", 4,
+// MakeTestnetFilesCommand constructs a command to generate testnet config files.
+func MakeTestnetFilesCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "testnet",
+ Short: "Initialize files for a Tendermint testnet",
+ Long: `testnet will create "v" + "n" number of directories and populate each with
+necessary files (private validator, genesis, config, etc.).
+
+Note, strict routability for addresses is turned off in the config file.
+
+Optionally, it will fill in persistent-peers list in config file using either hostnames or IPs.
+
+Example:
+
+ tendermint testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2
+ `,
+ }
+ var (
+ nValidators int
+ nNonValidators int
+ initialHeight int64
+ configFile string
+ outputDir string
+ nodeDirPrefix string
+
+ populatePersistentPeers bool
+ hostnamePrefix string
+ hostnameSuffix string
+ startingIPAddress string
+ hostnames []string
+ p2pPort int
+ randomMonikers bool
+ keyType string
+ )
+
+ cmd.Flags().IntVar(&nValidators, "v", 4,
"number of validators to initialize the testnet with")
- TestnetFilesCmd.Flags().StringVar(&configFile, "config", "",
+ cmd.Flags().StringVar(&configFile, "config", "",
"config file to use (note some options may be overwritten)")
- TestnetFilesCmd.Flags().IntVar(&nNonValidators, "n", 0,
+ cmd.Flags().IntVar(&nNonValidators, "n", 0,
"number of non-validators to initialize the testnet with")
- TestnetFilesCmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
+ cmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
"directory to store initialization data for the testnet")
- TestnetFilesCmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
+ cmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
"prefix the directory name for each node with (node results in node0, node1, ...)")
- TestnetFilesCmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
+ cmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
"initial height of the first block")
- TestnetFilesCmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
+ cmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
"update config of each node with the list of persistent peers build using either"+
" hostname-prefix or"+
" starting-ip-address")
- TestnetFilesCmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
+ cmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
- TestnetFilesCmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
+ cmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
"hostname suffix ("+
"\".xyz.com\""+
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
- TestnetFilesCmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
+ cmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
"starting IP address ("+
"\"192.168.0.1\""+
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
- TestnetFilesCmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
+ cmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
- TestnetFilesCmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
+ cmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
"P2P Port")
- TestnetFilesCmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
+ cmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
"randomize the moniker for each generated node")
-}
-
-// TestnetFilesCmd allows initialisation of files for a Tendermint testnet.
-var TestnetFilesCmd = &cobra.Command{
- Use: "testnet",
- Short: "Initialize files for a Tendermint testnet",
- Long: `testnet will create "v" + "n" number of directories and populate each with
-necessary files (private validator, genesis, config, etc.).
-
-Note, strict routability for addresses is turned off in the config file.
-
-Optionally, it will fill in persistent-peers list in config file using either hostnames or IPs.
-
-Example:
-
- tendermint testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2
- `,
- RunE: testnetFiles,
-}
-
-func testnetFiles(cmd *cobra.Command, args []string) error {
- if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
- return fmt.Errorf(
- "testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
- nValidators+nNonValidators,
- )
- }
+ cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+ "Key type to generate privval file with. Options: ed25519, secp256k1")
+
+ cmd.RunE = func(cmd *cobra.Command, args []string) error {
+ if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
+ return fmt.Errorf(
+ "testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
+ nValidators+nNonValidators,
+ )
+ }
- // set mode to validator for testnet
- config := cfg.DefaultValidatorConfig()
+ // set mode to validator for testnet
+ config := cfg.DefaultValidatorConfig()
- // overwrite default config if set and valid
- if configFile != "" {
- viper.SetConfigFile(configFile)
- if err := viper.ReadInConfig(); err != nil {
- return err
- }
- if err := viper.Unmarshal(config); err != nil {
- return err
- }
- if err := config.ValidateBasic(); err != nil {
- return err
+ // overwrite default config if set and valid
+ if configFile != "" {
+ viper.SetConfigFile(configFile)
+ if err := viper.ReadInConfig(); err != nil {
+ return err
+ }
+ if err := viper.Unmarshal(config); err != nil {
+ return err
+ }
+ if err := config.ValidateBasic(); err != nil {
+ return err
+ }
}
- }
- genVals := make([]types.GenesisValidator, nValidators)
+ genVals := make([]types.GenesisValidator, nValidators)
+ ctx := cmd.Context()
+ for i := 0; i < nValidators; i++ {
+ nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
+ nodeDir := filepath.Join(outputDir, nodeDirName)
+ config.SetRoot(nodeDir)
+
+ err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
+ if err != nil {
+ _ = os.RemoveAll(outputDir)
+ return err
+ }
+ err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
+ if err != nil {
+ _ = os.RemoveAll(outputDir)
+ return err
+ }
- for i := 0; i < nValidators; i++ {
- nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
- nodeDir := filepath.Join(outputDir, nodeDirName)
- config.SetRoot(nodeDir)
+ if err := initFilesWithConfig(ctx, nodeConfig{Config: config}, logger); err != nil {
+ return err
+ }
- err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
- if err != nil {
- _ = os.RemoveAll(outputDir)
- return err
- }
- err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
- if err != nil {
- _ = os.RemoveAll(outputDir)
- return err
- }
+ pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
+ pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
+ pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
+ if err != nil {
+ return err
+ }
- if err := initFilesWithConfig(config); err != nil {
- return err
- }
+ ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
+ defer cancel()
- pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
- pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
- pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
- if err != nil {
- return err
+ pubKey, err := pv.GetPubKey(ctx, crypto.QuorumHash{})
+ if err != nil {
+ return fmt.Errorf("can't get pubkey in testnet files: %w", err)
+ }
+ genVals[i] = types.GenesisValidator{
+ PubKey: pubKey,
+ Power: 1,
+ Name: nodeDirName,
+ }
}
- ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
- defer cancel()
+ for i := 0; i < nNonValidators; i++ {
+ nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
+ config.SetRoot(nodeDir)
- pubKey, err := pv.GetPubKey(ctx, crypto.QuorumHash{})
- if err != nil {
- return fmt.Errorf("can't get pubkey in testnet files: %w", err)
- }
- genVals[i] = types.GenesisValidator{
- PubKey: pubKey,
- Power: 1,
- Name: nodeDirName,
- }
- }
+ err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
+ if err != nil {
+ _ = os.RemoveAll(outputDir)
+ return err
+ }
- for i := 0; i < nNonValidators; i++ {
- nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
- config.SetRoot(nodeDir)
+ err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
+ if err != nil {
+ _ = os.RemoveAll(outputDir)
+ return err
+ }
- err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
- if err != nil {
- _ = os.RemoveAll(outputDir)
- return err
+			if err := initFilesWithConfig(ctx, nodeConfig{Config: config}, logger); err != nil {
+ return err
+ }
}
- err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
- if err != nil {
- _ = os.RemoveAll(outputDir)
- return err
+ // Generate genesis doc from generated validators
+ genDoc := &types.GenesisDoc{
+ ChainID: "chain-" + tmrand.Str(6),
+ GenesisTime: tmtime.Now(),
+ InitialHeight: initialHeight,
+ Validators: genVals,
+ ConsensusParams: types.DefaultConsensusParams(),
}
-
- if err := initFilesWithConfig(config); err != nil {
- return err
+ if keyType == "secp256k1" {
+ genDoc.ConsensusParams.Validator = types.ValidatorParams{
+ PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
+ }
}
- }
- // Generate genesis doc from generated validators
- genDoc := &types.GenesisDoc{
- ChainID: "chain-" + tmrand.Str(6),
- GenesisTime: tmtime.Now(),
- InitialHeight: initialHeight,
- Validators: genVals,
- ConsensusParams: types.DefaultConsensusParams(),
- }
- if keyType == "secp256k1" {
- genDoc.ConsensusParams.Validator = types.ValidatorParams{
- PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
+ // Write genesis file.
+ for i := 0; i < nValidators+nNonValidators; i++ {
+ nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
+ if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
+ _ = os.RemoveAll(outputDir)
+ return err
+ }
}
- }
- // Write genesis file.
- for i := 0; i < nValidators+nNonValidators; i++ {
- nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
- if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
- _ = os.RemoveAll(outputDir)
- return err
+ // Gather persistent peer addresses.
+ var (
+ persistentPeers = make([]string, 0)
+ err error
+ )
+ tpargs := testnetPeerArgs{
+ numValidators: nValidators,
+ numNonValidators: nNonValidators,
+ peerToPeerPort: p2pPort,
+ nodeDirPrefix: nodeDirPrefix,
+ outputDir: outputDir,
+ hostnames: hostnames,
+ startingIPAddr: startingIPAddress,
+ hostnamePrefix: hostnamePrefix,
+ hostnameSuffix: hostnameSuffix,
+ randomMonikers: randomMonikers,
}
- }
- // Gather persistent peer addresses.
- var (
- persistentPeers = make([]string, 0)
- err error
- )
- if populatePersistentPeers {
- persistentPeers, err = persistentPeersArray(config)
- if err != nil {
- _ = os.RemoveAll(outputDir)
- return err
+ if populatePersistentPeers {
+
+ persistentPeers, err = persistentPeersArray(config, tpargs)
+ if err != nil {
+ _ = os.RemoveAll(outputDir)
+ return err
+ }
}
- }
- // Overwrite default config.
- for i := 0; i < nValidators+nNonValidators; i++ {
- nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
- config.SetRoot(nodeDir)
- config.P2P.AddrBookStrict = false
- config.P2P.AllowDuplicateIP = true
- if populatePersistentPeers {
- persistentPeersWithoutSelf := make([]string, 0)
- for j := 0; j < len(persistentPeers); j++ {
- if j == i {
- continue
+ // Overwrite default config.
+ for i := 0; i < nValidators+nNonValidators; i++ {
+ nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
+ config.SetRoot(nodeDir)
+ config.P2P.AllowDuplicateIP = true
+ if populatePersistentPeers {
+ persistentPeersWithoutSelf := make([]string, 0)
+ for j := 0; j < len(persistentPeers); j++ {
+ if j == i {
+ continue
+ }
+ persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
}
- persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
+ config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
}
- config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
- }
- config.Moniker = moniker(i)
+ config.Moniker = tpargs.moniker(i)
- if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
- return err
+ if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
+ return err
+ }
}
+
+ fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
+ return nil
}
- fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
- return nil
+ return cmd
}
-func hostnameOrIP(i int) string {
- if len(hostnames) > 0 && i < len(hostnames) {
- return hostnames[i]
+type testnetPeerArgs struct {
+ numValidators int
+ numNonValidators int
+ peerToPeerPort int
+ nodeDirPrefix string
+ outputDir string
+ hostnames []string
+ startingIPAddr string
+ hostnamePrefix string
+ hostnameSuffix string
+ randomMonikers bool
+}
+
+func (args *testnetPeerArgs) hostnameOrIP(i int) (string, error) {
+ if len(args.hostnames) > 0 && i < len(args.hostnames) {
+ return args.hostnames[i], nil
}
- if startingIPAddress == "" {
- return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
+ if args.startingIPAddr == "" {
+ return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix), nil
}
- ip := net.ParseIP(startingIPAddress)
+ ip := net.ParseIP(args.startingIPAddr)
ip = ip.To4()
if ip == nil {
- fmt.Printf("%v: non ipv4 address\n", startingIPAddress)
- os.Exit(1)
+ return "", fmt.Errorf("%v is non-ipv4 address", args.startingIPAddr)
}
for j := 0; j < i; j++ {
ip[3]++
}
- return ip.String()
+ return ip.String(), nil
+
}
// get an array of persistent peers
-func persistentPeersArray(config *cfg.Config) ([]string, error) {
- peers := make([]string, nValidators+nNonValidators)
- for i := 0; i < nValidators+nNonValidators; i++ {
- nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
+func persistentPeersArray(config *cfg.Config, args testnetPeerArgs) ([]string, error) {
+ peers := make([]string, args.numValidators+args.numNonValidators)
+ for i := 0; i < len(peers); i++ {
+ nodeDir := filepath.Join(args.outputDir, fmt.Sprintf("%s%d", args.nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := config.LoadNodeKeyID()
if err != nil {
- return []string{}, err
+ return nil, err
}
- peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
+ addr, err := args.hostnameOrIP(i)
+ if err != nil {
+ return nil, err
+ }
+
+ peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", addr, args.peerToPeerPort))
}
return peers, nil
}
-func moniker(i int) string {
- if randomMonikers {
+func (args *testnetPeerArgs) moniker(i int) string {
+ if args.randomMonikers {
return randomMoniker()
}
- if len(hostnames) > 0 && i < len(hostnames) {
- return hostnames[i]
+ if len(args.hostnames) > 0 && i < len(args.hostnames) {
+ return args.hostnames[i]
}
- if startingIPAddress == "" {
- return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
+ if args.startingIPAddr == "" {
+ return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix)
}
return randomMoniker()
}
diff --git a/cmd/tenderdash/main.go b/cmd/tenderdash/main.go
index f71b7538e1..7320267fbd 100644
--- a/cmd/tenderdash/main.go
+++ b/cmd/tenderdash/main.go
@@ -1,41 +1,48 @@
package main
import (
- "os"
- "path/filepath"
+ "context"
- cmd "github.com/tendermint/tendermint/cmd/tenderdash/commands"
+ "github.com/tendermint/tendermint/cmd/tenderdash/commands"
"github.com/tendermint/tendermint/cmd/tenderdash/commands/debug"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
+ "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/node"
)
func main() {
- initFilesCommand := cmd.InitFilesCmd
- cmd.AddInitFlags(initFilesCommand)
-
- rootCmd := cmd.RootCmd
- rootCmd.AddCommand(
- cmd.GenValidatorCmd,
- cmd.ReIndexEventCmd,
- cmd.InitFilesCmd,
- cmd.ProbeUpnpCmd,
- cmd.LightCmd,
- cmd.ReplayCmd,
- cmd.ReplayConsoleCmd,
- cmd.ResetAllCmd,
- cmd.ResetPrivValidatorCmd,
- cmd.ShowValidatorCmd,
- cmd.TestnetFilesCmd,
- cmd.ShowNodeIDCmd,
- cmd.GenNodeKeyCmd,
- cmd.VersionCmd,
- cmd.InspectCmd,
- cmd.RollbackStateCmd,
- cmd.MakeKeyMigrateCommand(),
- debug.DebugCmd,
- cli.NewCompletionCmd(rootCmd, true),
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ conf, err := commands.ParseConfig(config.DefaultConfig())
+ if err != nil {
+ panic(err)
+ }
+
+ logger, err := log.NewDefaultLogger(conf.LogFormat, conf.LogLevel)
+ if err != nil {
+ panic(err)
+ }
+
+ rcmd := commands.RootCommand(conf, logger)
+ rcmd.AddCommand(
+ commands.MakeGenValidatorCommand(),
+ commands.MakeReindexEventCommand(conf, logger),
+ commands.MakeInitFilesCommand(conf, logger),
+ commands.MakeLightCommand(conf, logger),
+ commands.MakeReplayCommand(conf, logger),
+ commands.MakeReplayConsoleCommand(conf, logger),
+ commands.MakeShowValidatorCommand(conf, logger),
+ commands.MakeTestnetFilesCommand(conf, logger),
+ commands.MakeShowNodeIDCommand(conf),
+ commands.GenNodeKeyCmd,
+ commands.VersionCmd,
+ commands.MakeInspectCommand(conf, logger),
+ commands.MakeRollbackStateCommand(conf),
+ commands.MakeKeyMigrateCommand(conf, logger),
+ debug.GetDebugCommand(logger),
+ commands.NewCompletionCmd(rcmd, true),
)
// NOTE:
@@ -49,10 +56,9 @@ func main() {
nodeFunc := node.NewDefault
// Create & start node
- rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
+ rcmd.AddCommand(commands.NewRunNodeCmd(nodeFunc, conf, logger))
- cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)))
- if err := cmd.Execute(); err != nil {
+ if err := cli.RunWithTrace(ctx, rcmd); err != nil {
panic(err)
}
}
diff --git a/config/config.go b/config/config.go
index 3af0a86be5..3ccab645ed 100644
--- a/config/config.go
+++ b/config/config.go
@@ -2,18 +2,18 @@ package config
import (
"encoding/hex"
+ "encoding/json"
"errors"
"fmt"
- "io/ioutil"
"net/http"
"os"
"path/filepath"
+ "strings"
"time"
"github.com/dashevo/dashd-go/btcjson"
"github.com/tendermint/tendermint/crypto"
- tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/types"
@@ -31,12 +31,6 @@ const (
ModeFull = "full"
ModeValidator = "validator"
ModeSeed = "seed"
-
- BlockSyncV0 = "v0"
- BlockSyncV2 = "v2"
-
- MempoolV0 = "v0"
- MempoolV1 = "v1"
)
// NOTE: Most of the structs & relevant comments + the
@@ -57,19 +51,14 @@ var (
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
- defaultNodeKeyName = "node_key.json"
- defaultAddrBookName = "addrbook.json"
+ defaultNodeKeyName = "node_key.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
- defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
- defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
-
- minSubscriptionBufferSize = 100
- defaultSubscriptionBufferSize = 200
+ defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
)
// Config defines the top level configuration for a Tendermint node
@@ -82,7 +71,6 @@ type Config struct {
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
StateSync *StateSyncConfig `mapstructure:"statesync"`
- BlockSync *BlockSyncConfig `mapstructure:"blocksync"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx-index"`
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"`
@@ -97,7 +85,6 @@ func DefaultConfig() *Config {
P2P: DefaultP2PConfig(),
Mempool: DefaultMempoolConfig(),
StateSync: DefaultStateSyncConfig(),
- BlockSync: DefaultBlockSyncConfig(),
Consensus: DefaultConsensusConfig(),
TxIndex: DefaultTxIndexConfig(),
Instrumentation: DefaultInstrumentationConfig(),
@@ -120,7 +107,6 @@ func TestConfig() *Config {
P2P: TestP2PConfig(),
Mempool: TestMempoolConfig(),
StateSync: TestStateSyncConfig(),
- BlockSync: TestBlockSyncConfig(),
Consensus: TestConsensusConfig(),
TxIndex: TestTxIndexConfig(),
Instrumentation: TestInstrumentationConfig(),
@@ -148,18 +134,12 @@ func (cfg *Config) ValidateBasic() error {
if err := cfg.RPC.ValidateBasic(); err != nil {
return fmt.Errorf("error in [rpc] section: %w", err)
}
- if err := cfg.P2P.ValidateBasic(); err != nil {
- return fmt.Errorf("error in [p2p] section: %w", err)
- }
if err := cfg.Mempool.ValidateBasic(); err != nil {
return fmt.Errorf("error in [mempool] section: %w", err)
}
if err := cfg.StateSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [statesync] section: %w", err)
}
- if err := cfg.BlockSync.ValidateBasic(); err != nil {
- return fmt.Errorf("error in [blocksync] section: %w", err)
- }
if err := cfg.Consensus.ValidateBasic(); err != nil {
return fmt.Errorf("error in [consensus] section: %w", err)
}
@@ -169,6 +149,10 @@ func (cfg *Config) ValidateBasic() error {
return nil
}
+func (cfg *Config) DeprecatedFieldWarning() error {
+ return cfg.Consensus.DeprecatedFieldWarning()
+}
+
//-----------------------------------------------------------------------------
// BaseConfig
@@ -306,12 +290,12 @@ func (cfg BaseConfig) NodeKeyFile() string {
// LoadNodeKey loads NodeKey located in filePath.
func (cfg BaseConfig) LoadNodeKeyID() (types.NodeID, error) {
- jsonBytes, err := ioutil.ReadFile(cfg.NodeKeyFile())
+ jsonBytes, err := os.ReadFile(cfg.NodeKeyFile())
if err != nil {
return "", err
}
nodeKey := types.NodeKey{}
- err = tmjson.Unmarshal(jsonBytes, &nodeKey)
+ err = json.Unmarshal(jsonBytes, &nodeKey)
if err != nil {
return "", err
}
@@ -362,28 +346,6 @@ func (cfg BaseConfig) ValidateBasic() error {
return fmt.Errorf("unknown mode: %v", cfg.Mode)
}
- // TODO (https://github.com/tendermint/tendermint/issues/6908) remove this check after the v0.35 release cycle.
- // This check was added to give users an upgrade prompt to use the new
- // configuration option in v0.35. In future release cycles they should no longer
- // be using this configuration parameter so the check can be removed.
- // The cfg.Other field can likely be removed at the same time if it is not referenced
- // elsewhere as it was added to service this check.
- if fs, ok := cfg.Other["fastsync"]; ok {
- if _, ok := fs.(map[string]interface{}); ok {
- return fmt.Errorf("a configuration section named 'fastsync' was found in the " +
- "configuration file. The 'fastsync' section has been renamed to " +
- "'blocksync', please update the 'fastsync' field in your configuration file to 'blocksync'")
- }
- }
- if fs, ok := cfg.Other["fast-sync"]; ok {
- if fs != "" {
- return fmt.Errorf("a parameter named 'fast-sync' was found in the " +
- "configuration file. The parameter to enable or disable quickly syncing with a blockchain" +
- "has moved to the [blocksync] section of the configuration file as blocksync.enable. " +
- "Please move the 'fast-sync' field in your configuration file to 'blocksync.enable'")
- }
- }
-
return nil
}
@@ -498,24 +460,10 @@ type RPCConfig struct {
// A list of non-simple headers the client is allowed to use with cross-domain requests.
CORSAllowedHeaders []string `mapstructure:"cors-allowed-headers"`
- // TCP or UNIX socket address for the gRPC server to listen on
- // NOTE: This server only supports /broadcast_tx_commit
- // Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
- GRPCListenAddress string `mapstructure:"grpc-laddr"`
-
- // Maximum number of simultaneous connections.
- // Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
- // If you want to accept a larger number than the default, make sure
- // you increase your OS limits.
- // 0 - unlimited.
- // Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
- GRPCMaxOpenConnections int `mapstructure:"grpc-max-open-connections"`
-
// Activate unsafe RPC commands like /dial-persistent-peers and /unsafe-flush-mempool
Unsafe bool `mapstructure:"unsafe"`
// Maximum number of simultaneous connections (including WebSocket).
- // Does not include gRPC connections. See grpc-max-open-connections
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
@@ -529,32 +477,36 @@ type RPCConfig struct {
MaxSubscriptionClients int `mapstructure:"max-subscription-clients"`
// Maximum number of unique queries a given client can /subscribe to
- // If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set
+ // If you're using a Local RPC client and /broadcast_tx_commit, set this
// to the estimated maximum number of broadcast_tx_commit calls per block.
MaxSubscriptionsPerClient int `mapstructure:"max-subscriptions-per-client"`
- // The number of events that can be buffered per subscription before
- // returning `ErrOutOfCapacity`.
- SubscriptionBufferSize int `mapstructure:"experimental-subscription-buffer-size"`
-
- // The maximum number of responses that can be buffered per WebSocket
- // client. If clients cannot read from the WebSocket endpoint fast enough,
- // they will be disconnected, so increasing this parameter may reduce the
- // chances of them being disconnected (but will cause the node to use more
- // memory).
+ // If true, disable the websocket interface to the RPC service. This has
+ // the effect of disabling the /subscribe, /unsubscribe, and /unsubscribe_all
+ // methods for event subscription.
//
- // Must be at least the same as `SubscriptionBufferSize`, otherwise
- // connections may be dropped unnecessarily.
- WebSocketWriteBufferSize int `mapstructure:"experimental-websocket-write-buffer-size"`
-
- // If a WebSocket client cannot read fast enough, at present we may
- // silently drop events instead of generating an error or disconnecting the
- // client.
+ // EXPERIMENTAL: This setting will be removed in Tendermint v0.37.
+ ExperimentalDisableWebsocket bool `mapstructure:"experimental-disable-websocket"`
+
+ // The time window size for the event log. All events within this window
+ // before the latest event (up to EventLogMaxItems) are available for
+ // subscribers to fetch via the /events method. If 0 (the default), the
+ // event log and the /events RPC method are disabled.
+ EventLogWindowSize time.Duration `mapstructure:"event-log-window-size"`
+
+ // The maximum number of events that may be retained by the event log. If
+ // this value is 0, no upper limit is set. Otherwise, items in excess of
+ // this number will be discarded from the event log.
//
- // Enabling this parameter will cause the WebSocket connection to be closed
- // instead if it cannot read fast enough, allowing for greater
- // predictability in subscription behavior.
- CloseOnSlowClient bool `mapstructure:"experimental-close-on-slow-client"`
+ // Warning: This setting is a safety valve. Setting it too low may cause
+ // subscribers to miss events. Try to choose a value higher than the
+ // maximum worst-case expected event load within the chosen window size in
+ // ordinary operation.
+ //
+ // For example, if the window size is 10 minutes and the node typically
+ // averages 1000 events per ten minutes, but with occasional known spikes of
+ // up to 2000, choose a value > 2000.
+ EventLogMaxItems int `mapstructure:"event-log-max-items"`
// How long to wait for a tx to be committed during /broadcast_tx_commit
// WARNING: Using a value larger than 10s will result in increasing the
@@ -593,21 +545,22 @@ type RPCConfig struct {
// DefaultRPCConfig returns a default configuration for the RPC server
func DefaultRPCConfig() *RPCConfig {
return &RPCConfig{
- ListenAddress: "tcp://127.0.0.1:26657",
- CORSAllowedOrigins: []string{},
- CORSAllowedMethods: []string{http.MethodHead, http.MethodGet, http.MethodPost},
- CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
- GRPCListenAddress: "",
- GRPCMaxOpenConnections: 900,
+ ListenAddress: "tcp://127.0.0.1:26657",
+ CORSAllowedOrigins: []string{},
+ CORSAllowedMethods: []string{http.MethodHead, http.MethodGet, http.MethodPost},
+ CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
Unsafe: false,
MaxOpenConnections: 900,
- MaxSubscriptionClients: 100,
- MaxSubscriptionsPerClient: 5,
- SubscriptionBufferSize: defaultSubscriptionBufferSize,
- TimeoutBroadcastTxCommit: 10 * time.Second,
- WebSocketWriteBufferSize: defaultSubscriptionBufferSize,
+ // Settings for event subscription.
+ MaxSubscriptionClients: 100,
+ MaxSubscriptionsPerClient: 5,
+ ExperimentalDisableWebsocket: false, // compatible with TM v0.35 and earlier
+ EventLogWindowSize: 0, // disables /events RPC by default
+ EventLogMaxItems: 0,
+
+ TimeoutBroadcastTxCommit: 10 * time.Second,
MaxBodyBytes: int64(1000000), // 1MB
MaxHeaderBytes: 1 << 20, // same as the net/http default
@@ -621,7 +574,6 @@ func DefaultRPCConfig() *RPCConfig {
func TestRPCConfig() *RPCConfig {
cfg := DefaultRPCConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36657"
- cfg.GRPCListenAddress = "tcp://127.0.0.1:36658"
cfg.Unsafe = true
return cfg
}
@@ -629,9 +581,6 @@ func TestRPCConfig() *RPCConfig {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *RPCConfig) ValidateBasic() error {
- if cfg.GRPCMaxOpenConnections < 0 {
- return errors.New("grpc-max-open-connections can't be negative")
- }
if cfg.MaxOpenConnections < 0 {
return errors.New("max-open-connections can't be negative")
}
@@ -641,17 +590,11 @@ func (cfg *RPCConfig) ValidateBasic() error {
if cfg.MaxSubscriptionsPerClient < 0 {
return errors.New("max-subscriptions-per-client can't be negative")
}
- if cfg.SubscriptionBufferSize < minSubscriptionBufferSize {
- return fmt.Errorf(
- "experimental-subscription-buffer-size must be >= %d",
- minSubscriptionBufferSize,
- )
+ if cfg.EventLogWindowSize < 0 {
+ return errors.New("event-log-window-size must not be negative")
}
- if cfg.WebSocketWriteBufferSize < cfg.SubscriptionBufferSize {
- return fmt.Errorf(
- "experimental-websocket-write-buffer-size must be >= experimental-subscription-buffer-size (%d)",
- cfg.SubscriptionBufferSize,
- )
+ if cfg.EventLogMaxItems < 0 {
+ return errors.New("event-log-max-items must not be negative")
}
if cfg.TimeoutBroadcastTxCommit < 0 {
return errors.New("timeout-broadcast-tx-commit can't be negative")
@@ -723,25 +666,6 @@ type P2PConfig struct { //nolint: maligned
// UPNP port forwarding
UPNP bool `mapstructure:"upnp"`
- // Path to address book
- AddrBook string `mapstructure:"addr-book-file"`
-
- // Set true for strict address routability rules
- // Set false for private or local networks
- AddrBookStrict bool `mapstructure:"addr-book-strict"`
-
- // Maximum number of inbound peers
- //
- // TODO: Remove once p2p refactor is complete in favor of MaxConnections.
- // ref: https://github.com/tendermint/tendermint/issues/5670
- MaxNumInboundPeers int `mapstructure:"max-num-inbound-peers"`
-
- // Maximum number of outbound peers to connect to, excluding persistent peers.
- //
- // TODO: Remove once p2p refactor is complete in favor of MaxConnections.
- // ref: https://github.com/tendermint/tendermint/issues/5670
- MaxNumOutboundPeers int `mapstructure:"max-num-outbound-peers"`
-
// MaxConnections defines the maximum number of connected peers (inbound and
// outbound).
MaxConnections uint16 `mapstructure:"max-connections"`
@@ -750,11 +674,15 @@ type P2PConfig struct { //nolint: maligned
// attempts per IP address.
MaxIncomingConnectionAttempts uint `mapstructure:"max-incoming-connection-attempts"`
- // List of node IDs, to which a connection will be (re)established ignoring any existing limits
- UnconditionalPeerIDs string `mapstructure:"unconditional-peer-ids"`
+ // Set true to enable the peer-exchange reactor
+ PexReactor bool `mapstructure:"pex"`
+
+ // Comma separated list of peer IDs to keep private (will not be gossiped to
+ // other peers)
+ PrivatePeerIDs string `mapstructure:"private-peer-ids"`
- // Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
- PersistentPeersMaxDialPeriod time.Duration `mapstructure:"persistent-peers-max-dial-period"`
+ // Toggle to disable the guard against peers connecting from the same IP.
+ AllowDuplicateIP bool `mapstructure:"allow-duplicate-ip"`
// Time to wait before flushing messages out on the connection
FlushThrottleTimeout time.Duration `mapstructure:"flush-throttle-timeout"`
@@ -768,16 +696,6 @@ type P2PConfig struct { //nolint: maligned
// Rate at which packets can be received, in bytes/second
RecvRate int64 `mapstructure:"recv-rate"`
- // Set true to enable the peer-exchange reactor
- PexReactor bool `mapstructure:"pex"`
-
- // Comma separated list of peer IDs to keep private (will not be gossiped to
- // other peers)
- PrivatePeerIDs string `mapstructure:"private-peer-ids"`
-
- // Toggle to disable guard against peers connecting from the same ip.
- AllowDuplicateIP bool `mapstructure:"allow-duplicate-ip"`
-
// Peer connection configuration.
HandshakeTimeout time.Duration `mapstructure:"handshake-timeout"`
DialTimeout time.Duration `mapstructure:"dial-timeout"`
@@ -786,13 +704,8 @@ type P2PConfig struct { //nolint: maligned
// Force dial to fail
TestDialFail bool `mapstructure:"test-dial-fail"`
- // UseLegacy enables the "legacy" P2P implementation and
- // disables the newer default implementation. This flag will
- // be removed in a future release.
- UseLegacy bool `mapstructure:"use-legacy"`
-
// Makes it possible to configure which queue backend the p2p
- // layer uses. Options are: "fifo", "priority" and "wdrr",
+ // layer uses. Options are: "fifo" and "priority",
// with the default being "priority".
QueueType string `mapstructure:"queue-type"`
}
@@ -803,13 +716,8 @@ func DefaultP2PConfig() *P2PConfig {
ListenAddress: "tcp://0.0.0.0:26656",
ExternalAddress: "",
UPNP: false,
- AddrBook: defaultAddrBookPath,
- AddrBookStrict: true,
- MaxNumInboundPeers: 40,
- MaxNumOutboundPeers: 10,
MaxConnections: 64,
MaxIncomingConnectionAttempts: 100,
- PersistentPeersMaxDialPeriod: 0 * time.Second,
FlushThrottleTimeout: 100 * time.Millisecond,
// The MTU (Maximum Transmission Unit) for Ethernet is 1500 bytes.
// The IP header and the TCP header take up 20 bytes each at least (unless
@@ -825,39 +733,15 @@ func DefaultP2PConfig() *P2PConfig {
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
- UseLegacy: false,
}
}
-// TestP2PConfig returns a configuration for testing the peer-to-peer layer
-func TestP2PConfig() *P2PConfig {
- cfg := DefaultP2PConfig()
- cfg.ListenAddress = "tcp://127.0.0.1:36656"
- cfg.FlushThrottleTimeout = 10 * time.Millisecond
- cfg.AllowDuplicateIP = true
- return cfg
-}
-
-// AddrBookFile returns the full path to the address book
-func (cfg *P2PConfig) AddrBookFile() string {
- return rootify(cfg.AddrBook, cfg.RootDir)
-}
-
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *P2PConfig) ValidateBasic() error {
- if cfg.MaxNumInboundPeers < 0 {
- return errors.New("max-num-inbound-peers can't be negative")
- }
- if cfg.MaxNumOutboundPeers < 0 {
- return errors.New("max-num-outbound-peers can't be negative")
- }
if cfg.FlushThrottleTimeout < 0 {
return errors.New("flush-throttle-timeout can't be negative")
}
- if cfg.PersistentPeersMaxDialPeriod < 0 {
- return errors.New("persistent-peers-max-dial-period can't be negative")
- }
if cfg.MaxPacketMsgPayloadSize < 0 {
return errors.New("max-packet-msg-payload-size can't be negative")
}
@@ -870,12 +754,20 @@ func (cfg *P2PConfig) ValidateBasic() error {
return nil
}
+// TestP2PConfig returns a configuration for testing the peer-to-peer layer
+func TestP2PConfig() *P2PConfig {
+ cfg := DefaultP2PConfig()
+ cfg.ListenAddress = "tcp://127.0.0.1:36656"
+ cfg.AllowDuplicateIP = true
+ cfg.FlushThrottleTimeout = 10 * time.Millisecond
+ return cfg
+}
+
//-----------------------------------------------------------------------------
// MempoolConfig
// MempoolConfig defines the configuration options for the Tendermint mempool.
type MempoolConfig struct {
- Version string `mapstructure:"version"`
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
Broadcast bool `mapstructure:"broadcast"`
@@ -925,7 +817,6 @@ type MempoolConfig struct {
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool.
func DefaultMempoolConfig() *MempoolConfig {
return &MempoolConfig{
- Version: MempoolV1,
Recheck: true,
Broadcast: true,
// Each signature verification takes .5ms, Size reduced until we implement
@@ -1092,42 +983,6 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
return nil
}
-//-----------------------------------------------------------------------------
-
-// BlockSyncConfig (formerly known as FastSync) defines the configuration for the Tendermint block sync service
-// If this node is many blocks behind the tip of the chain, BlockSync
-// allows them to catchup quickly by downloading blocks in parallel
-// and verifying their commits.
-type BlockSyncConfig struct {
- Enable bool `mapstructure:"enable"`
- Version string `mapstructure:"version"`
-}
-
-// DefaultBlockSyncConfig returns a default configuration for the block sync service
-func DefaultBlockSyncConfig() *BlockSyncConfig {
- return &BlockSyncConfig{
- Enable: true,
- Version: BlockSyncV0,
- }
-}
-
-// TestBlockSyncConfig returns a default configuration for the block sync.
-func TestBlockSyncConfig() *BlockSyncConfig {
- return DefaultBlockSyncConfig()
-}
-
-// ValidateBasic performs basic validation.
-func (cfg *BlockSyncConfig) ValidateBasic() error {
- switch cfg.Version {
- case BlockSyncV0:
- return nil
- case BlockSyncV2:
- return errors.New("blocksync version v2 is no longer supported. Please use v0")
- default:
- return fmt.Errorf("unknown blocksync version %s", cfg.Version)
- }
-}
-
//-----------------------------------------------------------------------------
// ConsensusConfig
@@ -1138,42 +993,22 @@ type ConsensusConfig struct {
WalPath string `mapstructure:"wal-file"`
walFile string // overrides WalPath if set
- // TODO: remove timeout configs, these should be global not local
- // How long we wait for a proposal block before prevoting nil
- TimeoutPropose time.Duration `mapstructure:"timeout-propose"`
- // How much timeout-propose increases with each round
- TimeoutProposeDelta time.Duration `mapstructure:"timeout-propose-delta"`
- // How long we wait after receiving +2/3 prevotes for “anything” (ie. not a single block or nil)
- TimeoutPrevote time.Duration `mapstructure:"timeout-prevote"`
- // How much the timeout-prevote increases with each round
- TimeoutPrevoteDelta time.Duration `mapstructure:"timeout-prevote-delta"`
- // How long we wait after receiving +2/3 precommits for “anything” (ie. not a single block or nil)
- TimeoutPrecommit time.Duration `mapstructure:"timeout-precommit"`
- // How much the timeout-precommit increases with each round
- TimeoutPrecommitDelta time.Duration `mapstructure:"timeout-precommit-delta"`
- // How long we wait after committing a block, before starting on the new
- // height (this gives us a chance to receive some more precommits, even
- // though we already have +2/3).
- TimeoutCommit time.Duration `mapstructure:"timeout-commit"`
+ // EmptyBlocks mode and possible interval between empty blocks
+ CreateEmptyBlocks bool `mapstructure:"create-empty-blocks"`
+ CreateEmptyBlocksInterval time.Duration `mapstructure:"create-empty-blocks-interval"`
+ // CreateProofBlockRange determines how many past blocks are inspected to
+ // decide whether an additional proof block needs to be created.
+ CreateProofBlockRange int64 `mapstructure:"create-proof-block-range"`
// The proposed block time window is twice this value: for a 10 sec value the
// window is 20 sec, 10 sec before NOW and 10 sec after.
// This value is used to validate a block time.
ProposedBlockTimeWindow time.Duration `mapstructure:"proposed-block-time-window"`
- // Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
- SkipTimeoutCommit bool `mapstructure:"skip-timeout-commit"`
// Don't propose a block if the node is set to the proposer, the block proposal instead
// has to be manual (useful for tests)
DontAutoPropose bool `mapstructure:"dont-auto-propose"`
- // EmptyBlocks mode and possible interval between empty blocks
- CreateEmptyBlocks bool `mapstructure:"create-empty-blocks"`
- CreateEmptyBlocksInterval time.Duration `mapstructure:"create-empty-blocks-interval"`
- // CreateProofBlockRange determines how many past blocks are inspected in order to determine if we need to create
- // additional proof block.
- CreateProofBlockRange int64 `mapstructure:"create-proof-block-range"`
-
// Reactor sleep duration parameters
PeerGossipSleepDuration time.Duration `mapstructure:"peer-gossip-sleep-duration"`
PeerQueryMaj23SleepDuration time.Duration `mapstructure:"peer-query-maj23-sleep-duration"`
@@ -1183,22 +1018,59 @@ type ConsensusConfig struct {
QuorumType btcjson.LLMQType `mapstructure:"quorum-type"`
AppHashSize int `mapstructure:"app-hash-size"`
+
+ // TODO: The following fields are all temporary overrides that should exist only
+ // for the duration of the v0.36 release. The below fields should be completely
+ // removed in the v0.37 release of Tendermint.
+ // See: https://github.com/tendermint/tendermint/issues/8188
+
+ // UnsafeProposeTimeoutOverride provides an unsafe override of the Propose
+ // timeout consensus parameter. It configures how long the consensus engine
+ // will wait to receive a proposal block before prevoting nil.
+ UnsafeProposeTimeoutOverride time.Duration `mapstructure:"unsafe-propose-timeout-override"`
+ // UnsafeProposeTimeoutDeltaOverride provides an unsafe override of the
+ // ProposeDelta timeout consensus parameter. It configures how much the
+ // propose timeout increases with each round.
+ UnsafeProposeTimeoutDeltaOverride time.Duration `mapstructure:"unsafe-propose-timeout-delta-override"`
+ // UnsafeVoteTimeoutOverride provides an unsafe override of the Vote timeout
+ // consensus parameter. It configures how long the consensus engine will wait
+ // to gather additional votes after receiving +2/3 votes in a round.
+ UnsafeVoteTimeoutOverride time.Duration `mapstructure:"unsafe-vote-timeout-override"`
+ // UnsafeVoteTimeoutDeltaOverride provides an unsafe override of the VoteDelta
+ // timeout consensus parameter. It configures how much the vote timeout
+ // increases with each round.
+ UnsafeVoteTimeoutDeltaOverride time.Duration `mapstructure:"unsafe-vote-timeout-delta-override"`
+ // UnsafeCommitTimeoutOverride provides an unsafe override of the Commit timeout
+ // consensus parameter. It configures how long the consensus engine will wait
+ // after receiving +2/3 precommits before beginning the next height.
+ UnsafeCommitTimeoutOverride time.Duration `mapstructure:"unsafe-commit-timeout-override"`
+
+ // UnsafeBypassCommitTimeoutOverride provides an unsafe override of the
+ // BypassCommitTimeout consensus parameter. It configures if the consensus
+ // engine will wait for the full Commit timeout before proceeding to the next height.
+ // If it is set to true, the consensus engine will proceed to the next height
+ // as soon as the node has gathered votes from all of the validators on the network.
+ UnsafeBypassCommitTimeoutOverride *bool `mapstructure:"unsafe-bypass-commit-timeout-override"`
+
+ // Deprecated timeout parameters. These parameters are present in this struct
+ // so that they can be parsed, allowing validation to detect when they have
+ // erroneously been included and to provide a helpful error message.
+ // These fields should be completely removed in v0.37.
+ // See: https://github.com/tendermint/tendermint/issues/8188
+ DeprecatedTimeoutPropose *interface{} `mapstructure:"timeout-propose"`
+ DeprecatedTimeoutProposeDelta *interface{} `mapstructure:"timeout-propose-delta"`
+ DeprecatedTimeoutPrevote *interface{} `mapstructure:"timeout-prevote"`
+ DeprecatedTimeoutPrevoteDelta *interface{} `mapstructure:"timeout-prevote-delta"`
+ DeprecatedTimeoutPrecommit *interface{} `mapstructure:"timeout-precommit"`
+ DeprecatedTimeoutPrecommitDelta *interface{} `mapstructure:"timeout-precommit-delta"`
+ DeprecatedTimeoutCommit *interface{} `mapstructure:"timeout-commit"`
+ DeprecatedSkipTimeoutCommit *interface{} `mapstructure:"skip-timeout-commit"`
}
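UnsafeBypassCommitTimeoutOverride is declared as *bool rather than bool so that "not set" remains distinguishable from "explicitly set to false". A sketch of how such a tri-state override might be applied; the applyOverride helper and the consensusParams stand-in are assumptions for illustration, not code from this patch:

```go
package main

import "fmt"

// consensusParams stands in for the real ConsensusParams value being overridden.
type consensusParams struct {
	BypassCommitTimeout bool
}

// applyOverride copies the override into params only when the pointer is
// non-nil, leaving the consensus-level default untouched otherwise.
func applyOverride(params *consensusParams, override *bool) {
	if override != nil {
		params.BypassCommitTimeout = *override
	}
}

func main() {
	p := consensusParams{BypassCommitTimeout: true}
	applyOverride(&p, nil) // unset override: default preserved
	fmt.Println(p.BypassCommitTimeout)

	f := false
	applyOverride(&p, &f) // explicit false: applied
	fmt.Println(p.BypassCommitTimeout)
}
```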
// DefaultConsensusConfig returns a default configuration for the consensus service
func DefaultConsensusConfig() *ConsensusConfig {
return &ConsensusConfig{
WalPath: filepath.Join(defaultDataDir, "cs.wal", "wal"),
- TimeoutPropose: 3000 * time.Millisecond,
- TimeoutProposeDelta: 500 * time.Millisecond,
- TimeoutPrevote: 1000 * time.Millisecond,
- TimeoutPrevoteDelta: 500 * time.Millisecond,
- TimeoutPrecommit: 1000 * time.Millisecond,
- TimeoutPrecommitDelta: 500 * time.Millisecond,
- TimeoutCommit: 1000 * time.Millisecond,
- ProposedBlockTimeWindow: 10 * time.Second,
- SkipTimeoutCommit: false,
- DontAutoPropose: false,
CreateEmptyBlocks: true,
CreateEmptyBlocksInterval: 0 * time.Second,
CreateProofBlockRange: 1,
@@ -1207,21 +1079,14 @@ func DefaultConsensusConfig() *ConsensusConfig {
DoubleSignCheckHeight: int64(0),
AppHashSize: crypto.DefaultAppHashSize,
QuorumType: btcjson.LLMQType_5_60,
+ ProposedBlockTimeWindow: 10 * time.Second,
+ DontAutoPropose: false,
}
}
// TestConsensusConfig returns a configuration for testing the consensus service
func TestConsensusConfig() *ConsensusConfig {
cfg := DefaultConsensusConfig()
- cfg.TimeoutPropose = 80 * time.Millisecond
- cfg.TimeoutProposeDelta = 5 * time.Millisecond
- cfg.TimeoutPrevote = 50 * time.Millisecond
- cfg.TimeoutPrevoteDelta = 5 * time.Millisecond
- cfg.TimeoutPrecommit = 50 * time.Millisecond
- cfg.TimeoutPrecommitDelta = 5 * time.Millisecond
- // NOTE: when modifying, make sure to update time_iota_ms (testGenesisFmt) in toml.go
- cfg.TimeoutCommit = 10 * time.Millisecond
- cfg.SkipTimeoutCommit = true
cfg.PeerGossipSleepDuration = 5 * time.Millisecond
cfg.PeerQueryMaj23SleepDuration = 250 * time.Millisecond
cfg.DoubleSignCheckHeight = int64(0)
@@ -1235,33 +1100,6 @@ func (cfg *ConsensusConfig) WaitForTxs() bool {
return !cfg.CreateEmptyBlocks || cfg.CreateEmptyBlocksInterval > 0
}
-// Propose returns the amount of time to wait for a proposal
-func (cfg *ConsensusConfig) Propose(round int32) time.Duration {
- return time.Duration(
- cfg.TimeoutPropose.Nanoseconds()+cfg.TimeoutProposeDelta.Nanoseconds()*int64(round),
- ) * time.Nanosecond
-}
-
-// Prevote returns the amount of time to wait for straggler votes after receiving any +2/3 prevotes
-func (cfg *ConsensusConfig) Prevote(round int32) time.Duration {
- return time.Duration(
- cfg.TimeoutPrevote.Nanoseconds()+cfg.TimeoutPrevoteDelta.Nanoseconds()*int64(round),
- ) * time.Nanosecond
-}
-
-// Precommit returns the amount of time to wait for straggler votes after receiving any +2/3 precommits
-func (cfg *ConsensusConfig) Precommit(round int32) time.Duration {
- return time.Duration(
- cfg.TimeoutPrecommit.Nanoseconds()+cfg.TimeoutPrecommitDelta.Nanoseconds()*int64(round),
- ) * time.Nanosecond
-}
-
-// Commit returns the amount of time to wait for straggler votes after receiving +2/3 precommits
-// for a single block (ie. a commit).
-func (cfg *ConsensusConfig) Commit(t time.Time) time.Time {
- return t.Add(cfg.TimeoutCommit)
-}
-
// WalFile returns the full path to the write-ahead log file
func (cfg *ConsensusConfig) WalFile() string {
if cfg.walFile != "" {
@@ -1278,26 +1116,20 @@ func (cfg *ConsensusConfig) SetWalFile(walFile string) {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *ConsensusConfig) ValidateBasic() error {
- if cfg.TimeoutPropose < 0 {
- return errors.New("timeout-propose can't be negative")
+ if cfg.UnsafeProposeTimeoutOverride < 0 {
+ return errors.New("unsafe-propose-timeout-override can't be negative")
}
- if cfg.TimeoutProposeDelta < 0 {
- return errors.New("timeout-propose-delta can't be negative")
+ if cfg.UnsafeProposeTimeoutDeltaOverride < 0 {
+ return errors.New("unsafe-propose-timeout-delta-override can't be negative")
}
- if cfg.TimeoutPrevote < 0 {
- return errors.New("timeout-prevote can't be negative")
+ if cfg.UnsafeVoteTimeoutOverride < 0 {
+ return errors.New("unsafe-vote-timeout-override can't be negative")
}
- if cfg.TimeoutPrevoteDelta < 0 {
- return errors.New("timeout-prevote-delta can't be negative")
+ if cfg.UnsafeVoteTimeoutDeltaOverride < 0 {
+ return errors.New("unsafe-vote-timeout-delta-override can't be negative")
}
- if cfg.TimeoutPrecommit < 0 {
- return errors.New("timeout-precommit can't be negative")
- }
- if cfg.TimeoutPrecommitDelta < 0 {
- return errors.New("timeout-precommit-delta can't be negative")
- }
- if cfg.TimeoutCommit < 0 {
- return errors.New("timeout-commit can't be negative")
+ if cfg.UnsafeCommitTimeoutOverride < 0 {
+ return errors.New("unsafe-commit-timeout-override can't be negative")
}
if cfg.ProposedBlockTimeWindow < 0 {
return errors.New("proposed-block-time can't be negative")
@@ -1320,6 +1152,41 @@ func (cfg *ConsensusConfig) ValidateBasic() error {
return nil
}
+func (cfg *ConsensusConfig) DeprecatedFieldWarning() error {
+ var fields []string
+ if cfg.DeprecatedSkipTimeoutCommit != nil {
+ fields = append(fields, "skip-timeout-commit")
+ }
+ if cfg.DeprecatedTimeoutPropose != nil {
+ fields = append(fields, "timeout-propose")
+ }
+ if cfg.DeprecatedTimeoutProposeDelta != nil {
+ fields = append(fields, "timeout-propose-delta")
+ }
+ if cfg.DeprecatedTimeoutPrevote != nil {
+ fields = append(fields, "timeout-prevote")
+ }
+ if cfg.DeprecatedTimeoutPrevoteDelta != nil {
+ fields = append(fields, "timeout-prevote-delta")
+ }
+ if cfg.DeprecatedTimeoutPrecommit != nil {
+ fields = append(fields, "timeout-precommit")
+ }
+ if cfg.DeprecatedTimeoutPrecommitDelta != nil {
+ fields = append(fields, "timeout-precommit-delta")
+ }
+ if cfg.DeprecatedTimeoutCommit != nil {
+ fields = append(fields, "timeout-commit")
+ }
+ if len(fields) != 0 {
+ return fmt.Errorf("the following deprecated fields were set in the "+
+ "configuration file: %s. These fields were removed in v0.36. Timeout "+
+ "configuration has been moved to the ConsensusParams. For more information see "+
+ "https://tinyurl.com/adr074", strings.Join(fields, ", "))
+ }
+ return nil
+}
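DeprecatedFieldWarning above relies on mapstructure populating *interface{} fields whenever the corresponding key appears in the file, regardless of its type. The same presence check can be sketched stdlib-only against the decoded map; raw here stands in for the map mapstructure would produce, and this helper is an illustration rather than the patch's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// deprecatedConsensusKeys lists the timeout parameters removed in v0.36,
// matching the fields checked by DeprecatedFieldWarning in the patch.
var deprecatedConsensusKeys = []string{
	"timeout-propose", "timeout-propose-delta",
	"timeout-prevote", "timeout-prevote-delta",
	"timeout-precommit", "timeout-precommit-delta",
	"timeout-commit", "skip-timeout-commit",
}

// deprecatedFieldWarning reports which deprecated keys appear in a decoded
// [consensus] table, whatever value type they carry.
func deprecatedFieldWarning(raw map[string]interface{}) error {
	var found []string
	for _, k := range deprecatedConsensusKeys {
		if _, ok := raw[k]; ok {
			found = append(found, k)
		}
	}
	if len(found) != 0 {
		return fmt.Errorf("deprecated fields set: %s (removed in v0.36; "+
			"timeout configuration moved to ConsensusParams)",
			strings.Join(found, ", "))
	}
	return nil
}

func main() {
	raw := map[string]interface{}{"timeout-commit": "1s", "wal-file": "cs.wal/wal"}
	fmt.Println(deprecatedFieldWarning(raw))
}
```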
+
//-----------------------------------------------------------------------------
// TxIndexConfig
// Remember that Event has the following structure:
@@ -1336,9 +1203,8 @@ type TxIndexConfig struct {
// If the list contains `null`, no indexer service will be used.
//
// Options:
- // 1) "null" - no indexer services.
- // 2) "kv" (default) - the simplest possible indexer,
- // backed by key-value storage (defaults to levelDB; see DBBackend).
+ // 1) "null" (default) - no indexer services.
+ // 2) "kv" - a simple indexer backed by key-value storage (see DBBackend)
// 3) "psql" - the indexer services backed by PostgreSQL.
Indexer []string `mapstructure:"indexer"`
@@ -1349,14 +1215,12 @@ type TxIndexConfig struct {
// DefaultTxIndexConfig returns a default configuration for the transaction indexer.
func DefaultTxIndexConfig() *TxIndexConfig {
- return &TxIndexConfig{
- Indexer: []string{"kv"},
- }
+ return &TxIndexConfig{Indexer: []string{"null"}}
}
// TestTxIndexConfig returns a default configuration for the transaction indexer.
func TestTxIndexConfig() *TxIndexConfig {
- return DefaultTxIndexConfig()
+ return &TxIndexConfig{Indexer: []string{"kv"}}
}
//-----------------------------------------------------------------------------
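The patch flips the default indexer from "kv" to "null", and the comment above states that a `null` entry anywhere in the list disables indexing. A sketch of that precedence rule; the indexerEnabled helper is an assumption for illustration, not a function from the patch:

```go
package main

import "fmt"

// indexerEnabled reports whether any indexer service should start, following
// the convention that "null" anywhere in the list disables indexing entirely.
func indexerEnabled(indexers []string) bool {
	for _, name := range indexers {
		if name == "null" {
			return false // "null" wins over any other entry
		}
	}
	return len(indexers) > 0
}

func main() {
	fmt.Println(indexerEnabled([]string{"null"})) // new default: disabled
	fmt.Println(indexerEnabled([]string{"kv"}))   // old default: enabled
}
```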
diff --git a/config/config_test.go b/config/config_test.go
index aa536dd61f..a86ab84636 100644
--- a/config/config_test.go
+++ b/config/config_test.go
@@ -10,46 +10,43 @@ import (
)
func TestDefaultConfig(t *testing.T) {
- assert := assert.New(t)
-
// set up some defaults
cfg := DefaultConfig()
- assert.NotNil(cfg.P2P)
- assert.NotNil(cfg.Mempool)
- assert.NotNil(cfg.Consensus)
+ assert.NotNil(t, cfg.P2P)
+ assert.NotNil(t, cfg.Mempool)
+ assert.NotNil(t, cfg.Consensus)
// check the root dir stuff...
cfg.SetRoot("/foo")
cfg.Genesis = "bar"
cfg.DBPath = "/opt/data"
- assert.Equal("/foo/bar", cfg.GenesisFile())
- assert.Equal("/opt/data", cfg.DBDir())
+ assert.Equal(t, "/foo/bar", cfg.GenesisFile())
+ assert.Equal(t, "/opt/data", cfg.DBDir())
}
func TestConfigValidateBasic(t *testing.T) {
cfg := DefaultConfig()
assert.NoError(t, cfg.ValidateBasic())
- // tamper with timeout_propose
- cfg.Consensus.TimeoutPropose = -10 * time.Second
+ // tamper with unsafe-propose-timeout-override
+ cfg.Consensus.UnsafeProposeTimeoutOverride = -10 * time.Second
assert.Error(t, cfg.ValidateBasic())
}
func TestTLSConfiguration(t *testing.T) {
- assert := assert.New(t)
cfg := DefaultConfig()
cfg.SetRoot("/home/user")
cfg.RPC.TLSCertFile = "file.crt"
- assert.Equal("/home/user/config/file.crt", cfg.RPC.CertFile())
+ assert.Equal(t, "/home/user/config/file.crt", cfg.RPC.CertFile())
cfg.RPC.TLSKeyFile = "file.key"
- assert.Equal("/home/user/config/file.key", cfg.RPC.KeyFile())
+ assert.Equal(t, "/home/user/config/file.key", cfg.RPC.KeyFile())
cfg.RPC.TLSCertFile = "/abs/path/to/file.crt"
- assert.Equal("/abs/path/to/file.crt", cfg.RPC.CertFile())
+ assert.Equal(t, "/abs/path/to/file.crt", cfg.RPC.CertFile())
cfg.RPC.TLSKeyFile = "/abs/path/to/file.key"
- assert.Equal("/abs/path/to/file.key", cfg.RPC.KeyFile())
+ assert.Equal(t, "/abs/path/to/file.key", cfg.RPC.KeyFile())
}
func TestBaseConfigValidateBasic(t *testing.T) {
@@ -66,7 +63,6 @@ func TestRPCConfigValidateBasic(t *testing.T) {
assert.NoError(t, cfg.ValidateBasic())
fieldsToTest := []string{
- "GRPCMaxOpenConnections",
"MaxOpenConnections",
"MaxSubscriptionClients",
"MaxSubscriptionsPerClient",
@@ -82,26 +78,6 @@ func TestRPCConfigValidateBasic(t *testing.T) {
}
}
-func TestP2PConfigValidateBasic(t *testing.T) {
- cfg := TestP2PConfig()
- assert.NoError(t, cfg.ValidateBasic())
-
- fieldsToTest := []string{
- "MaxNumInboundPeers",
- "MaxNumOutboundPeers",
- "FlushThrottleTimeout",
- "MaxPacketMsgPayloadSize",
- "SendRate",
- "RecvRate",
- }
-
- for _, fieldName := range fieldsToTest {
- reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(-1)
- assert.Error(t, cfg.ValidateBasic())
- reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(0)
- }
-}
-
func TestMempoolConfigValidateBasic(t *testing.T) {
cfg := TestMempoolConfig()
assert.NoError(t, cfg.ValidateBasic())
@@ -125,42 +101,26 @@ func TestStateSyncConfigValidateBasic(t *testing.T) {
require.NoError(t, cfg.ValidateBasic())
}
-func TestBlockSyncConfigValidateBasic(t *testing.T) {
- cfg := TestBlockSyncConfig()
- assert.NoError(t, cfg.ValidateBasic())
-
- // tamper with version
- cfg.Version = "v2"
- assert.Error(t, cfg.ValidateBasic())
-
- cfg.Version = "invalid"
- assert.Error(t, cfg.ValidateBasic())
-}
-
func TestConsensusConfig_ValidateBasic(t *testing.T) {
testcases := map[string]struct {
modify func(*ConsensusConfig)
expectErr bool
}{
- "TimeoutPropose": {func(c *ConsensusConfig) { c.TimeoutPropose = time.Second }, false},
- "TimeoutPropose negative": {func(c *ConsensusConfig) { c.TimeoutPropose = -1 }, true},
- "TimeoutProposeDelta": {func(c *ConsensusConfig) { c.TimeoutProposeDelta = time.Second }, false},
- "TimeoutProposeDelta negative": {func(c *ConsensusConfig) { c.TimeoutProposeDelta = -1 }, true},
- "TimeoutPrevote": {func(c *ConsensusConfig) { c.TimeoutPrevote = time.Second }, false},
- "TimeoutPrevote negative": {func(c *ConsensusConfig) { c.TimeoutPrevote = -1 }, true},
- "TimeoutPrevoteDelta": {func(c *ConsensusConfig) { c.TimeoutPrevoteDelta = time.Second }, false},
- "TimeoutPrevoteDelta negative": {func(c *ConsensusConfig) { c.TimeoutPrevoteDelta = -1 }, true},
- "TimeoutPrecommit": {func(c *ConsensusConfig) { c.TimeoutPrecommit = time.Second }, false},
- "TimeoutPrecommit negative": {func(c *ConsensusConfig) { c.TimeoutPrecommit = -1 }, true},
- "TimeoutPrecommitDelta": {func(c *ConsensusConfig) { c.TimeoutPrecommitDelta = time.Second }, false},
- "TimeoutPrecommitDelta negative": {func(c *ConsensusConfig) { c.TimeoutPrecommitDelta = -1 }, true},
- "TimeoutCommit": {func(c *ConsensusConfig) { c.TimeoutCommit = time.Second }, false},
- "TimeoutCommit negative": {func(c *ConsensusConfig) { c.TimeoutCommit = -1 }, true},
- "PeerGossipSleepDuration": {func(c *ConsensusConfig) { c.PeerGossipSleepDuration = time.Second }, false},
- "PeerGossipSleepDuration negative": {func(c *ConsensusConfig) { c.PeerGossipSleepDuration = -1 }, true},
- "PeerQueryMaj23SleepDuration": {func(c *ConsensusConfig) { c.PeerQueryMaj23SleepDuration = time.Second }, false},
- "PeerQueryMaj23SleepDuration negative": {func(c *ConsensusConfig) { c.PeerQueryMaj23SleepDuration = -1 }, true},
- "DoubleSignCheckHeight negative": {func(c *ConsensusConfig) { c.DoubleSignCheckHeight = -1 }, true},
+ "UnsafeProposeTimeoutOverride": {func(c *ConsensusConfig) { c.UnsafeProposeTimeoutOverride = time.Second }, false},
+ "UnsafeProposeTimeoutOverride negative": {func(c *ConsensusConfig) { c.UnsafeProposeTimeoutOverride = -1 }, true},
+ "UnsafeProposeTimeoutDeltaOverride": {func(c *ConsensusConfig) { c.UnsafeProposeTimeoutDeltaOverride = time.Second }, false},
+ "UnsafeProposeTimeoutDeltaOverride negative": {func(c *ConsensusConfig) { c.UnsafeProposeTimeoutDeltaOverride = -1 }, true},
+ "UnsafeVoteTimeoutOverride": {func(c *ConsensusConfig) { c.UnsafeVoteTimeoutOverride = time.Second }, false},
+ "UnsafeVoteTimeoutOverride negative": {func(c *ConsensusConfig) { c.UnsafeVoteTimeoutOverride = -1 }, true},
+ "UnsafeVoteTimeoutDeltaOverride": {func(c *ConsensusConfig) { c.UnsafeVoteTimeoutDeltaOverride = time.Second }, false},
+ "UnsafeVoteTimeoutDeltaOverride negative": {func(c *ConsensusConfig) { c.UnsafeVoteTimeoutDeltaOverride = -1 }, true},
+ "UnsafeCommitTimeoutOverride": {func(c *ConsensusConfig) { c.UnsafeCommitTimeoutOverride = time.Second }, false},
+ "UnsafeCommitTimeoutOverride negative": {func(c *ConsensusConfig) { c.UnsafeCommitTimeoutOverride = -1 }, true},
+ "PeerGossipSleepDuration": {func(c *ConsensusConfig) { c.PeerGossipSleepDuration = time.Second }, false},
+ "PeerGossipSleepDuration negative": {func(c *ConsensusConfig) { c.PeerGossipSleepDuration = -1 }, true},
+ "PeerQueryMaj23SleepDuration": {func(c *ConsensusConfig) { c.PeerQueryMaj23SleepDuration = time.Second }, false},
+ "PeerQueryMaj23SleepDuration negative": {func(c *ConsensusConfig) { c.PeerQueryMaj23SleepDuration = -1 }, true},
+ "DoubleSignCheckHeight negative": {func(c *ConsensusConfig) { c.DoubleSignCheckHeight = -1 }, true},
}
for desc, tc := range testcases {
tc := tc // appease linter
@@ -186,3 +146,21 @@ func TestInstrumentationConfigValidateBasic(t *testing.T) {
cfg.MaxOpenConnections = -1
assert.Error(t, cfg.ValidateBasic())
}
+
+func TestP2PConfigValidateBasic(t *testing.T) {
+ cfg := TestP2PConfig()
+ assert.NoError(t, cfg.ValidateBasic())
+
+ fieldsToTest := []string{
+ "FlushThrottleTimeout",
+ "MaxPacketMsgPayloadSize",
+ "SendRate",
+ "RecvRate",
+ }
+
+ for _, fieldName := range fieldsToTest {
+ reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(-1)
+ assert.Error(t, cfg.ValidateBasic())
+ reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(0)
+ }
+}
diff --git a/config/db.go b/config/db.go
index 8f489a87aa..f508354e07 100644
--- a/config/db.go
+++ b/config/db.go
@@ -1,6 +1,8 @@
package config
import (
+ "context"
+
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/libs/log"
@@ -8,7 +10,7 @@ import (
)
// ServiceProvider takes a config and a logger and returns a ready to go Node.
-type ServiceProvider func(*Config, log.Logger) (service.Service, error)
+type ServiceProvider func(context.Context, *Config, log.Logger) (service.Service, error)
// DBContext specifies config information for loading a new DB.
type DBContext struct {
diff --git a/config/toml.go b/config/toml.go
index d5b432a7c6..ee5df22f6a 100644
--- a/config/toml.go
+++ b/config/toml.go
@@ -3,17 +3,17 @@ package config
import (
"bytes"
"fmt"
- "io/ioutil"
"os"
"path/filepath"
"strings"
"text/template"
tmos "github.com/tendermint/tendermint/libs/os"
+ tmrand "github.com/tendermint/tendermint/libs/rand"
)
-// DefaultDirPerm is the default permissions used when creating directories.
-const DefaultDirPerm = 0700
+// defaultDirPerm is the default permissions used when creating directories.
+const defaultDirPerm = 0700
var configTemplate *template.Template
@@ -32,13 +32,13 @@ func init() {
// EnsureRoot creates the root, config, and data directories if they don't exist,
// and panics if it fails.
func EnsureRoot(rootDir string) {
- if err := tmos.EnsureDir(rootDir, DefaultDirPerm); err != nil {
+ if err := tmos.EnsureDir(rootDir, defaultDirPerm); err != nil {
panic(err.Error())
}
- if err := tmos.EnsureDir(filepath.Join(rootDir, defaultConfigDir), DefaultDirPerm); err != nil {
+ if err := tmos.EnsureDir(filepath.Join(rootDir, defaultConfigDir), defaultDirPerm); err != nil {
panic(err.Error())
}
- if err := tmos.EnsureDir(filepath.Join(rootDir, defaultDataDir), DefaultDirPerm); err != nil {
+ if err := tmos.EnsureDir(filepath.Join(rootDir, defaultDataDir), defaultDirPerm); err != nil {
panic(err.Error())
}
}
@@ -209,26 +209,10 @@ cors-allowed-methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}
# A list of non simple headers the client is allowed to use with cross-domain requests
cors-allowed-headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
-# TCP or UNIX socket address for the gRPC server to listen on
-# NOTE: This server only supports /broadcast_tx_commit
-# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
-grpc-laddr = "{{ .RPC.GRPCListenAddress }}"
-
-# Maximum number of simultaneous connections.
-# Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
-# If you want to accept a larger number than the default, make sure
-# you increase your OS limits.
-# 0 - unlimited.
-# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
-# 1024 - 40 - 10 - 50 = 924 = ~900
-# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
-grpc-max-open-connections = {{ .RPC.GRPCMaxOpenConnections }}
-
# Activate unsafe RPC commands like /dial-seeds and /unsafe-flush-mempool
unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
-# Does not include gRPC connections. See grpc-max-open-connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
@@ -242,36 +226,36 @@ max-open-connections = {{ .RPC.MaxOpenConnections }}
max-subscription-clients = {{ .RPC.MaxSubscriptionClients }}
# Maximum number of unique queries a given client can /subscribe to
-# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to
-# the estimated # maximum number of broadcast_tx_commit calls per block.
+# If you're using a Local RPC client and /broadcast_tx_commit, set this
+# to the estimated maximum number of broadcast_tx_commit calls per block.
max-subscriptions-per-client = {{ .RPC.MaxSubscriptionsPerClient }}
-# Experimental parameter to specify the maximum number of events a node will
-# buffer, per subscription, before returning an error and closing the
-# subscription. Must be set to at least 100, but higher values will accommodate
-# higher event throughput rates (and will use more memory).
-experimental-subscription-buffer-size = {{ .RPC.SubscriptionBufferSize }}
-
-# Experimental parameter to specify the maximum number of RPC responses that
-# can be buffered per WebSocket client. If clients cannot read from the
-# WebSocket endpoint fast enough, they will be disconnected, so increasing this
-# parameter may reduce the chances of them being disconnected (but will cause
-# the node to use more memory).
+# If true, disable the websocket interface to the RPC service. This has
+# the effect of disabling the /subscribe, /unsubscribe, and /unsubscribe_all
+# methods for event subscription.
#
-# Must be at least the same as "experimental-subscription-buffer-size",
-# otherwise connections could be dropped unnecessarily. This value should
-# ideally be somewhat higher than "experimental-subscription-buffer-size" to
-# accommodate non-subscription-related RPC responses.
-experimental-websocket-write-buffer-size = {{ .RPC.WebSocketWriteBufferSize }}
-
-# If a WebSocket client cannot read fast enough, at present we may
-# silently drop events instead of generating an error or disconnecting the
-# client.
+# EXPERIMENTAL: This setting will be removed in Tendermint v0.37.
+experimental-disable-websocket = {{ .RPC.ExperimentalDisableWebsocket }}
+
+# The time window size for the event log. All events within this window before
+# the latest event (up to EventLogMaxItems) will be available for subscribers
+# to fetch via the /events method. If 0 (the default) the event log and the
+# /events RPC method are disabled.
+event-log-window-size = "{{ .RPC.EventLogWindowSize }}"
+
+# The maximum number of events that may be retained by the event log. If
+# this value is 0, no upper limit is set. Otherwise, items in excess of
+# this number will be discarded from the event log.
#
-# Enabling this experimental parameter will cause the WebSocket connection to
-# be closed instead if it cannot read fast enough, allowing for greater
-# predictability in subscription behavior.
-experimental-close-on-slow-client = {{ .RPC.CloseOnSlowClient }}
+# Warning: This setting is a safety valve. Setting it too low may cause
+# subscribers to miss events. Try to choose a value higher than the
+# maximum worst-case expected event load within the chosen window size in
+# ordinary operation.
+#
+# For example, if the window size is 10 minutes and the node typically
+# averages 1000 events per ten minutes, but with occasional known spikes of
+# up to 2000, choose a value > 2000.
+event-log-max-items = {{ .RPC.EventLogMaxItems }}
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
@@ -308,9 +292,6 @@ pprof-laddr = "{{ .RPC.PprofListenAddress }}"
#######################################################
[p2p]
-# Enable the legacy p2p layer.
-use-legacy = {{ .P2P.UseLegacy }}
-
# Select the p2p internal queue
queue-type = "{{ .P2P.QueueType }}"
@@ -342,86 +323,48 @@ persistent-peers = "{{ .P2P.PersistentPeers }}"
# UPNP port forwarding
upnp = {{ .P2P.UPNP }}
-# Path to address book
-# TODO: Remove once p2p refactor is complete in favor of peer store.
-addr-book-file = "{{ js .P2P.AddrBook }}"
-
-# Set true for strict address routability rules
-# Set false for private or local networks
-addr-book-strict = {{ .P2P.AddrBookStrict }}
-
-# Maximum number of inbound peers
-#
-# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
-# ref: https://github.com/tendermint/tendermint/issues/5670
-max-num-inbound-peers = {{ .P2P.MaxNumInboundPeers }}
-
-# Maximum number of outbound peers to connect to, excluding persistent peers
-#
-# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
-# ref: https://github.com/tendermint/tendermint/issues/5670
-max-num-outbound-peers = {{ .P2P.MaxNumOutboundPeers }}
-
# Maximum number of connections (inbound and outbound).
max-connections = {{ .P2P.MaxConnections }}
# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
-# List of node IDs, to which a connection will be (re)established ignoring any existing limits
-# TODO: Remove once p2p refactor is complete.
-# ref: https://github.com/tendermint/tendermint/issues/5670
-unconditional-peer-ids = "{{ .P2P.UnconditionalPeerIDs }}"
+# Set true to enable the peer-exchange reactor
+pex = {{ .P2P.PexReactor }}
-# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
-persistent-peers-max-dial-period = "{{ .P2P.PersistentPeersMaxDialPeriod }}"
+# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
+# Warning: IPs will be exposed at /net_info, for more information https://github.com/tendermint/tendermint/issues/3055
+private-peer-ids = "{{ .P2P.PrivatePeerIDs }}"
+
+# Toggle to disable the guard against peers connecting from the same IP.
+allow-duplicate-ip = {{ .P2P.AllowDuplicateIP }}
+
+# Peer connection configuration.
+handshake-timeout = "{{ .P2P.HandshakeTimeout }}"
+dial-timeout = "{{ .P2P.DialTimeout }}"
# Time to wait before flushing messages out on the connection
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
flush-throttle-timeout = "{{ .P2P.FlushThrottleTimeout }}"
# Maximum size of a message packet payload, in bytes
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
max-packet-msg-payload-size = {{ .P2P.MaxPacketMsgPayloadSize }}
# Rate at which packets can be sent, in bytes/second
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
send-rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
recv-rate = {{ .P2P.RecvRate }}
-# Set true to enable the peer-exchange reactor
-pex = {{ .P2P.PexReactor }}
-
-# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
-# Warning: IPs will be exposed at /net_info, for more information https://github.com/tendermint/tendermint/issues/3055
-private-peer-ids = "{{ .P2P.PrivatePeerIDs }}"
-
-# Toggle to disable guard against peers connecting from the same ip.
-allow-duplicate-ip = {{ .P2P.AllowDuplicateIP }}
-
-# Peer connection configuration.
-handshake-timeout = "{{ .P2P.HandshakeTimeout }}"
-dial-timeout = "{{ .P2P.DialTimeout }}"
#######################################################
### Mempool Configuration Option ###
#######################################################
[mempool]
-# Mempool version to use:
-# 1) "v0" - The legacy non-prioritized mempool reactor.
-# 2) "v1" (default) - The prioritized mempool reactor.
-version = "{{ .Mempool.Version }}"
-
recheck = {{ .Mempool.Recheck }}
broadcast = {{ .Mempool.Broadcast }}
@@ -510,21 +453,6 @@ chunk-request-timeout = "{{ .StateSync.ChunkRequestTimeout }}"
# The number of concurrent chunk and block fetchers to run (default: 4).
fetchers = "{{ .StateSync.Fetchers }}"
-#######################################################
-### Block Sync Configuration Connections ###
-#######################################################
-[blocksync]
-
-# If this node is many blocks behind the tip of the chain, BlockSync
-# allows them to catchup quickly by downloading blocks in parallel
-# and verifying their commits
-enable = {{ .BlockSync.Enable }}
-
-# Block Sync version to use:
-# 1) "v0" (default) - the standard Block Sync implementation
-# 2) "v2" - DEPRECATED, please use v0
-version = "{{ .BlockSync.Version }}"
-
#######################################################
### Consensus Configuration Options ###
#######################################################
@@ -532,22 +460,6 @@ version = "{{ .BlockSync.Version }}"
wal-file = "{{ js .Consensus.WalPath }}"
-# How long we wait for a proposal block before prevoting nil
-timeout-propose = "{{ .Consensus.TimeoutPropose }}"
-# How much timeout-propose increases with each round
-timeout-propose-delta = "{{ .Consensus.TimeoutProposeDelta }}"
-# How long we wait after receiving +2/3 prevotes for “anything” (ie. not a single block or nil)
-timeout-prevote = "{{ .Consensus.TimeoutPrevote }}"
-# How much the timeout-prevote increases with each round
-timeout-prevote-delta = "{{ .Consensus.TimeoutPrevoteDelta }}"
-# How long we wait after receiving +2/3 precommits for “anything” (ie. not a single block or nil)
-timeout-precommit = "{{ .Consensus.TimeoutPrecommit }}"
-# How much the timeout-precommit increases with each round
-timeout-precommit-delta = "{{ .Consensus.TimeoutPrecommitDelta }}"
-# How long we wait after committing a block, before starting on the new
-# height (this gives us a chance to receive some more precommits, even
-# though we already have +2/3).
-timeout-commit = "{{ .Consensus.TimeoutCommit }}"
# How long is the window for the min proposed block time
proposed-block-time-window = "{{ .Consensus.ProposedBlockTimeWindow }}"
@@ -557,9 +469,6 @@ proposed-block-time-window = "{{ .Consensus.ProposedBlockTimeWindow }}"
# So, validators should stop the state machine, wait for some blocks, and then restart the state machine to avoid panic.
double-sign-check-height = {{ .Consensus.DoubleSignCheckHeight }}
-# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
-skip-timeout-commit = {{ .Consensus.SkipTimeoutCommit }}
-
# EmptyBlocks mode and possible interval between empty blocks
create-empty-blocks = {{ .Consensus.CreateEmptyBlocks }}
create-empty-blocks-interval = "{{ .Consensus.CreateEmptyBlocksInterval }}"
@@ -571,6 +480,50 @@ create-proof-block-range = "{{ .Consensus.CreateProofBlockRange }}"
peer-gossip-sleep-duration = "{{ .Consensus.PeerGossipSleepDuration }}"
peer-query-maj23-sleep-duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
+### Unsafe Timeout Overrides ###
+
+# These fields provide temporary overrides for the Timeout consensus parameters.
+# Use of these parameters is strongly discouraged. Using these parameters may have serious
+# liveness implications for the validator and for the chain.
+#
+# These fields will be removed from the configuration file in the v0.37 release of Tendermint.
+# For additional information, see ADR-74:
+# https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-074-timeout-params.md
+
+# This field provides an unsafe override of the Propose timeout consensus parameter.
+# This field configures how long the consensus engine will wait for a proposal block before prevoting nil.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-propose-timeout-override = {{ .Consensus.UnsafeProposeTimeoutOverride }}
+
+# This field provides an unsafe override of the ProposeDelta timeout consensus parameter.
+# This field configures how much the propose timeout increases with each round.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-propose-timeout-delta-override = {{ .Consensus.UnsafeProposeTimeoutDeltaOverride }}
+
+# This field provides an unsafe override of the Vote timeout consensus parameter.
+# This field configures how long the consensus engine will wait after
+# receiving +2/3 votes in a round.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-vote-timeout-override = {{ .Consensus.UnsafeVoteTimeoutOverride }}
+
+# This field provides an unsafe override of the VoteDelta timeout consensus parameter.
+# This field configures how much the vote timeout increases with each round.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-vote-timeout-delta-override = {{ .Consensus.UnsafeVoteTimeoutDeltaOverride }}
+
+# This field provides an unsafe override of the Commit timeout consensus parameter.
+# This field configures how long the consensus engine will wait after receiving
+# +2/3 precommits before beginning the next height.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-commit-timeout-override = {{ .Consensus.UnsafeCommitTimeoutOverride }}
+
+# This field provides an unsafe override of the BypassCommitTimeout consensus parameter.
+# This field configures if the consensus engine will wait for the full Commit timeout
+# before proceeding to the next height.
+# If this field is set to true, the consensus engine will proceed to the next height
+# as soon as the node has gathered votes from all of the validators on the network.
+# unsafe-bypass-commit-timeout-override =
+
# Signing parameters
quorum-type = "{{ .Consensus.QuorumType }}"
@@ -589,8 +542,8 @@ app-hash-size = "{{ .Consensus.AppHashSize }}"
# to decide which txs to index based on configuration set in the application.
#
# Options:
-# 1) "null"
-# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
+# 1) "null" (default) - no indexer services.
+# 2) "kv" - a simple indexer backed by key-value storage (see DBBackend)
# 3) "psql" - the indexer services backed by PostgreSQL.
# When "kv" or "psql" is chosen "tx.height" and "tx.hash" will always be indexed.
indexer = [{{ range $i, $e := .TxIndex.Indexer }}{{if $i}}, {{end}}{{ printf "%q" $e}}{{end}}]
@@ -624,21 +577,21 @@ namespace = "{{ .Instrumentation.Namespace }}"
/****** these are for test settings ***********/
-func ResetTestRoot(testName string) (*Config, error) {
- return ResetTestRootWithChainID(testName, "")
+func ResetTestRoot(dir, testName string) (*Config, error) {
+ return ResetTestRootWithChainID(dir, testName, "")
}
-func ResetTestRootWithChainID(testName string, chainID string) (*Config, error) {
+func ResetTestRootWithChainID(dir, testName string, chainID string) (*Config, error) {
// create a unique, concurrency-safe test directory under os.TempDir()
- rootDir, err := ioutil.TempDir("", fmt.Sprintf("%s-%s_", chainID, testName))
+ rootDir, err := os.MkdirTemp(dir, fmt.Sprintf("%s-%s_", chainID, testName))
if err != nil {
return nil, err
}
// ensure config and data subdirs are created
- if err := tmos.EnsureDir(filepath.Join(rootDir, defaultConfigDir), DefaultDirPerm); err != nil {
+ if err := tmos.EnsureDir(filepath.Join(rootDir, defaultConfigDir), defaultDirPerm); err != nil {
return nil, err
}
- if err := tmos.EnsureDir(filepath.Join(rootDir, defaultDataDir), DefaultDirPerm); err != nil {
+ if err := tmos.EnsureDir(filepath.Join(rootDir, defaultDataDir), defaultDirPerm); err != nil {
return nil, err
}
@@ -670,17 +623,18 @@ func ResetTestRootWithChainID(testName string, chainID string) (*Config, error)
}
config := TestConfig().SetRoot(rootDir)
+ config.Instrumentation.Namespace = fmt.Sprintf("%s_%s_%s", testName, chainID, tmrand.Str(16))
return config, nil
}
func writeFile(filePath string, contents []byte, mode os.FileMode) error {
- if err := ioutil.WriteFile(filePath, contents, mode); err != nil {
+ if err := os.WriteFile(filePath, contents, mode); err != nil {
return fmt.Errorf("failed to write file: %w", err)
}
return nil
}
-var testGenesisFmt = `{
+const testGenesisFmt = `{
"genesis_time": "2018-10-10T08:20:13.695936996Z",
"chain_id": "%s",
"initial_height": "1",
@@ -691,6 +645,18 @@ var testGenesisFmt = `{
"max_gas": "-1",
"time_iota_ms": "10"
},
+ "synchrony": {
+ "message_delay": "500000000",
+ "precision": "10000000"
+ },
+ "timeout": {
+ "propose": "30000000",
+ "propose_delta": "50000",
+ "vote": "30000000",
+ "vote_delta": "50000",
+ "commit": "10000000",
+ "bypass_timeout_commit": true
+ },
"evidence": {
"max_age_num_blocks": "100000",
"max_age_duration": "172800000000000",
@@ -709,7 +675,7 @@ var testGenesisFmt = `{
"type": "tendermint/PubKeyBLS12381",
"value":"F5BjXeh0DppqaxX7a3LzoWr6CXPZcZeba6VHYdbiUCxQ23b00mFD8FRZpCz9Ug1E"
},
- "power": "100",
+ "power": 100,
"name": "",
"pro_tx_hash": "51BF39CC1F41B9FC63DFA5B1EDF3F0CA3AD5CAFAE4B12B4FE9263B08BB50C45F"
}
@@ -744,7 +710,7 @@ var testPrivValidatorKey = `{
"pro_tx_hash": "51BF39CC1F41B9FC63DFA5B1EDF3F0CA3AD5CAFAE4B12B4FE9263B08BB50C45F"
}`
-var testPrivValidatorState = `{
+const testPrivValidatorState = `{
"height": "0",
"round": 0,
"step": 0
diff --git a/config/toml_test.go b/config/toml_test.go
index 26376b72d2..cf27c4484a 100644
--- a/config/toml_test.go
+++ b/config/toml_test.go
@@ -1,7 +1,6 @@
package config
import (
- "io/ioutil"
"os"
"path/filepath"
"strings"
@@ -15,26 +14,22 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
for _, f := range files {
p := rootify(rootDir, f)
_, err := os.Stat(p)
- assert.Nil(t, err, p)
+ assert.NoError(t, err, p)
}
}
func TestEnsureRoot(t *testing.T) {
- require := require.New(t)
-
// setup temp dir for test
- tmpDir, err := ioutil.TempDir("", "config-test")
- require.NoError(err)
- defer os.RemoveAll(tmpDir)
+ tmpDir := t.TempDir()
// create root dir
EnsureRoot(tmpDir)
- require.NoError(WriteConfigFile(tmpDir, DefaultConfig()))
+ require.NoError(t, WriteConfigFile(tmpDir, DefaultConfig()))
// make sure config is set properly
- data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
- require.NoError(err)
+ data, err := os.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
+ require.NoError(t, err)
checkConfig(t, string(data))
@@ -42,19 +37,17 @@ func TestEnsureRoot(t *testing.T) {
}
func TestEnsureTestRoot(t *testing.T) {
- require := require.New(t)
-
testName := "ensureTestRoot"
// create root dir
- cfg, err := ResetTestRoot(testName)
- require.NoError(err)
+ cfg, err := ResetTestRoot(t.TempDir(), testName)
+ require.NoError(t, err)
defer os.RemoveAll(cfg.RootDir)
rootDir := cfg.RootDir
// make sure config is set properly
- data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
- require.Nil(err)
+ data, err := os.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
+ require.NoError(t, err)
checkConfig(t, string(data))
@@ -71,7 +64,6 @@ func checkConfig(t *testing.T, configFile string) {
"moniker",
"seeds",
"proxy-app",
- "blocksync",
"create-empty-blocks",
"peer",
"timeout",
diff --git a/crypto/README.md b/crypto/README.md
index 20346d7155..d60628d970 100644
--- a/crypto/README.md
+++ b/crypto/README.md
@@ -12,7 +12,7 @@ For any specific algorithm, use its specific module e.g.
## Binary encoding
-For Binary encoding, please refer to the [Tendermint encoding specification](https://docs.tendermint.com/master/spec/blockchain/encoding.html).
+For Binary encoding, please refer to the [Tendermint encoding specification](https://docs.tendermint.com/master/spec/core/encoding.html).
## JSON Encoding
diff --git a/crypto/bls12381/bls12381.go b/crypto/bls12381/bls12381.go
index e6297bd29a..cb554e8418 100644
--- a/crypto/bls12381/bls12381.go
+++ b/crypto/bls12381/bls12381.go
@@ -2,6 +2,8 @@ package bls12381
import (
"bytes"
+ "crypto/rand"
+ "crypto/sha256"
"crypto/subtle"
"encoding/hex"
"errors"
@@ -9,11 +11,10 @@ import (
"io"
bls "github.com/dashpay/bls-signatures/go-bindings"
+ "github.com/tendermint/tendermint/internal/jsontypes"
"github.com/tendermint/tendermint/crypto"
- "github.com/tendermint/tendermint/crypto/tmhash"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
- tmjson "github.com/tendermint/tendermint/libs/json"
)
//-------------------------------------
@@ -48,13 +49,16 @@ var (
)
func init() {
- tmjson.RegisterType(PubKey{}, PubKeyName)
- tmjson.RegisterType(PrivKey{}, PrivKeyName)
+ jsontypes.MustRegister(PubKey{})
+ jsontypes.MustRegister(PrivKey{})
}
// PrivKey implements crypto.PrivKey.
type PrivKey []byte
+// TypeTag satisfies the jsontypes.Tagged interface.
+func (PrivKey) TypeTag() string { return PrivKeyName }
+
// Bytes returns the privkey byte format.
func (privKey PrivKey) Bytes() []byte {
return privKey
@@ -145,7 +149,7 @@ func (privKey PrivKey) TypeValue() crypto.KeyType {
// It uses OS randomness in conjunction with the current global random seed
// in tendermint/libs/common to generate the private key.
func GenPrivKey() PrivKey {
- return genPrivKey(crypto.CReader())
+ return genPrivKey(rand.Reader)
}
// genPrivKey generates a new bls12381 private key using the provided reader.
@@ -168,8 +172,8 @@ func genPrivKey(rand io.Reader) PrivKey {
// NOTE: secret should be the output of a KDF like bcrypt,
// if it's derived from user input.
func GenPrivKeyFromSecret(secret []byte) PrivKey {
- seed := crypto.Sha256(secret) // Not Ripemd160 because we want 32 bytes.
- privKey, err := bls.PrivateKeyFromSeed(seed)
+ seed := sha256.Sum256(secret) // Not Ripemd160 because we want 32 bytes.
+ privKey, err := bls.PrivateKeyFromSeed(seed[:])
if err != nil {
panic(err)
}
@@ -205,7 +209,7 @@ func RecoverThresholdPublicKeyFromPublicKeys(publicKeys []crypto.PubKey, blsIds
}
for i, blsID := range blsIds {
- if len(blsID) != tmhash.Size {
+ if len(blsID) != crypto.HashSize {
return nil, fmt.Errorf("blsID incorrect size in public key recovery, expected 32 bytes (got %d)", len(blsID))
}
var hash bls.Hash
@@ -241,7 +245,7 @@ func RecoverThresholdSignatureFromShares(sigSharesData [][]byte, blsIds [][]byte
}
for i, blsID := range blsIds {
- if len(blsID) != tmhash.Size {
+ if len(blsID) != crypto.HashSize {
return nil, fmt.Errorf("blsID incorrect size in signature recovery, expected 32 bytes (got %d)", len(blsID))
}
var hash bls.Hash
@@ -263,12 +267,15 @@ var _ crypto.PubKey = PubKey{}
// PubKey PubKeyBLS12381 implements crypto.PubKey for the bls12381 signature scheme.
type PubKey []byte
+// TypeTag satisfies the jsontypes.Tagged interface.
+func (PubKey) TypeTag() string { return PubKeyName }
+
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKey) Address() crypto.Address {
if len(pubKey) != PubKeySize {
panic("pubkey is incorrect size")
}
- return tmhash.SumTruncated(pubKey)
+ return crypto.AddressHash(pubKey)
}
// Bytes returns the PubKey byte format.
diff --git a/crypto/crypto.go b/crypto/crypto.go
index 9c5073a1f2..7572d2d074 100644
--- a/crypto/crypto.go
+++ b/crypto/crypto.go
@@ -2,18 +2,23 @@ package crypto
import (
"bytes"
+ "crypto/sha256"
+ "encoding/json"
"errors"
"fmt"
"github.com/dashevo/dashd-go/btcjson"
- "github.com/tendermint/tendermint/crypto/tmhash"
+ "github.com/tendermint/tendermint/internal/jsontypes"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
)
const (
+ // HashSize is the size in bytes of an AddressHash.
+ HashSize = sha256.Size
+
// AddressSize is the size of a pubkey address.
- AddressSize = tmhash.TruncatedSize
+ AddressSize = 20
DefaultHashSize = 32
LargeAppHashSize = DefaultHashSize
SmallAppHashSize = 20
@@ -45,8 +50,23 @@ type ProTxHash = tmbytes.HexBytes
type QuorumHash = tmbytes.HexBytes
+// AddressHash computes a truncated SHA-256 hash of bz for use as
+// a peer address.
+//
+// See: https://docs.tendermint.com/master/spec/core/data_structures.html#address
+func AddressHash(bz []byte) Address {
+ h := sha256.Sum256(bz)
+ return Address(h[:AddressSize])
+}
+
+// Checksum returns the SHA256 of the bz.
+func Checksum(bz []byte) []byte {
+ h := sha256.Sum256(bz)
+ return h[:]
+}
+
func ProTxHashFromSeedBytes(bz []byte) ProTxHash {
- return tmhash.Sum(bz)
+ return Checksum(bz)
}
func RandProTxHash() ProTxHash {
@@ -98,9 +118,50 @@ func (sptxh SortProTxHash) Swap(i, j int) {
}
type QuorumKeys struct {
- PrivKey PrivKey `json:"priv_key"`
- PubKey PubKey `json:"pub_key"`
- ThresholdPublicKey PubKey `json:"threshold_public_key"`
+ PrivKey PrivKey
+ PubKey PubKey
+ ThresholdPublicKey PubKey
+}
+
+type quorumKeysJSON struct {
+ PrivKey json.RawMessage `json:"priv_key"`
+ PubKey json.RawMessage `json:"pub_key"`
+ ThresholdPublicKey json.RawMessage `json:"threshold_public_key"`
+}
+
+func (pvKey QuorumKeys) MarshalJSON() ([]byte, error) {
+ var keys quorumKeysJSON
+ var err error
+ keys.PrivKey, err = jsontypes.Marshal(pvKey.PrivKey)
+ if err != nil {
+ return nil, err
+ }
+ keys.PubKey, err = jsontypes.Marshal(pvKey.PubKey)
+ if err != nil {
+ return nil, err
+ }
+ keys.ThresholdPublicKey, err = jsontypes.Marshal(pvKey.ThresholdPublicKey)
+ if err != nil {
+ return nil, err
+ }
+ return json.Marshal(keys)
+}
+
+func (pvKey *QuorumKeys) UnmarshalJSON(data []byte) error {
+ var keys quorumKeysJSON
+ err := json.Unmarshal(data, &keys)
+ if err != nil {
+ return err
+ }
+ err = jsontypes.Unmarshal(keys.PrivKey, &pvKey.PrivKey)
+ if err != nil {
+ return err
+ }
+ err = jsontypes.Unmarshal(keys.PubKey, &pvKey.PubKey)
+ if err != nil {
+ return err
+ }
+ return jsontypes.Unmarshal(keys.ThresholdPublicKey, &pvKey.ThresholdPublicKey)
}
// Validator is a validator interface
@@ -109,7 +170,6 @@ type Validator interface {
}
type PubKey interface {
- HexStringer
Address() Address
Bytes() []byte
VerifySignature(msg []byte, sig []byte) bool
@@ -118,8 +178,11 @@ type PubKey interface {
VerifyAggregateSignature(msgs [][]byte, sig []byte) bool
Equals(PubKey) bool
Type() string
- TypeValue() KeyType
- String() string
+
+ // Implementations must support tagged encoding in JSON.
+ jsontypes.Tagged
+ fmt.Stringer
+ HexStringer
}
type PrivKey interface {
@@ -129,7 +192,9 @@ type PrivKey interface {
PubKey() PubKey
Equals(PrivKey) bool
Type() string
- TypeValue() KeyType
+
+ // Implementations must support tagged encoding in JSON.
+ jsontypes.Tagged
}
type Symmetric interface {
diff --git a/crypto/crypto_test.go b/crypto/crypto_test.go
new file mode 100644
index 0000000000..af89915f13
--- /dev/null
+++ b/crypto/crypto_test.go
@@ -0,0 +1,17 @@
+package crypto
+
+import (
+ "encoding/hex"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestChecksum(t *testing.T) {
+ // Since the SHA-256 hash algorithm is critical for Tenderdash, this test
+ // informs us if the hash algorithm is ever changed for any reason.
+ actual := Checksum([]byte("dash is the best cryptocurrency in the world"))
+ want, err := hex.DecodeString("FFE75CFE38997723E7C33D0457521B0BA75AB48B39BC467413BDC853ACC7476F")
+ require.NoError(t, err)
+ require.Equal(t, want, actual)
+}
diff --git a/crypto/ed25519/bench_test.go b/crypto/ed25519/bench_test.go
index e57cd393f5..49fcd15041 100644
--- a/crypto/ed25519/bench_test.go
+++ b/crypto/ed25519/bench_test.go
@@ -6,6 +6,7 @@ import (
"testing"
"github.com/stretchr/testify/require"
+
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
)
diff --git a/crypto/ed25519/ed25519.go b/crypto/ed25519/ed25519.go
index f445808dd3..1b26a18d61 100644
--- a/crypto/ed25519/ed25519.go
+++ b/crypto/ed25519/ed25519.go
@@ -2,6 +2,8 @@ package ed25519
import (
"bytes"
+ "crypto/rand"
+ "crypto/sha256"
"crypto/subtle"
"encoding/hex"
"errors"
@@ -12,8 +14,7 @@ import (
"github.com/oasisprotocol/curve25519-voi/primitives/ed25519/extra/cache"
"github.com/tendermint/tendermint/crypto"
- "github.com/tendermint/tendermint/crypto/tmhash"
- tmjson "github.com/tendermint/tendermint/libs/json"
+ "github.com/tendermint/tendermint/internal/jsontypes"
)
//-------------------------------------
@@ -57,13 +58,16 @@ const (
)
func init() {
- tmjson.RegisterType(PubKey{}, PubKeyName)
- tmjson.RegisterType(PrivKey{}, PrivKeyName)
+ jsontypes.MustRegister(PubKey{})
+ jsontypes.MustRegister(PrivKey{})
}
// PrivKey implements crypto.PrivKey.
type PrivKey []byte
+// TypeTag satisfies the jsontypes.Tagged interface.
+func (PrivKey) TypeTag() string { return PrivKeyName }
+
// Bytes returns the privkey byte format.
func (privKey PrivKey) Bytes() []byte {
return []byte(privKey)
@@ -138,7 +142,7 @@ func (privKey PrivKey) TypeValue() crypto.KeyType {
// It uses OS randomness in conjunction with the current global random seed
// in tendermint/libs/common to generate the private key.
func GenPrivKey() PrivKey {
- return genPrivKey(crypto.CReader())
+ return genPrivKey(rand.Reader)
}
// genPrivKey generates a new ed25519 private key using the provided reader.
@@ -156,9 +160,8 @@ func genPrivKey(rand io.Reader) PrivKey {
// NOTE: secret should be the output of a KDF like bcrypt,
// if it's derived from user input.
func GenPrivKeyFromSecret(secret []byte) PrivKey {
- seed := crypto.Sha256(secret) // Not Ripemd160 because we want 32 bytes.
-
- return PrivKey(ed25519.NewKeyFromSeed(seed))
+ seed := sha256.Sum256(secret)
+ return PrivKey(ed25519.NewKeyFromSeed(seed[:]))
}
//-------------------------------------
@@ -168,12 +171,15 @@ var _ crypto.PubKey = PubKey{}
// PubKeyEd25519 implements crypto.PubKey for the Ed25519 signature scheme.
type PubKey []byte
+// TypeTag satisfies the jsontypes.Tagged interface.
+func (PubKey) TypeTag() string { return PubKeyName }
+
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKey) Address() crypto.Address {
if len(pubKey) != PubKeySize {
panic("pubkey is incorrect size")
}
- return crypto.Address(tmhash.SumTruncated(pubKey))
+ return crypto.AddressHash(pubKey)
}
// Bytes returns the PubKey byte format.
@@ -268,5 +274,5 @@ func (b *BatchVerifier) Add(key crypto.PubKey, msg, signature []byte) error {
}
func (b *BatchVerifier) Verify() (bool, []bool) {
- return b.BatchVerifier.Verify(crypto.CReader())
+ return b.BatchVerifier.Verify(rand.Reader)
}
diff --git a/crypto/ed25519/ed25519_test.go b/crypto/ed25519/ed25519_test.go
index e40acd27dc..db8ff81849 100644
--- a/crypto/ed25519/ed25519_test.go
+++ b/crypto/ed25519/ed25519_test.go
@@ -17,7 +17,7 @@ func TestSignAndValidateEd25519(t *testing.T) {
msg := crypto.CRandBytes(128)
sig, err := privKey.SignDigest(msg)
- require.Nil(t, err)
+ require.NoError(t, err)
// Test the signature
assert.True(t, pubKey.VerifySignature(msg, sig))
diff --git a/crypto/encoding/codec.go b/crypto/encoding/codec.go
index 3319d0e5a0..8ca540ecd2 100644
--- a/crypto/encoding/codec.go
+++ b/crypto/encoding/codec.go
@@ -8,15 +8,15 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/secp256k1"
- "github.com/tendermint/tendermint/libs/json"
+ "github.com/tendermint/tendermint/internal/jsontypes"
cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
)
func init() {
- json.RegisterType((*cryptoproto.PublicKey)(nil), "tendermint.crypto.PublicKey")
- json.RegisterType((*cryptoproto.PublicKey_Bls12381)(nil), "tendermint.crypto.PublicKey_Bls12381")
- json.RegisterType((*cryptoproto.PublicKey_Ed25519)(nil), "tendermint.crypto.PublicKey_Ed25519")
- json.RegisterType((*cryptoproto.PublicKey_Secp256K1)(nil), "tendermint.crypto.PublicKey_Secp256K1")
+ jsontypes.MustRegister((*cryptoproto.PublicKey)(nil))
+ jsontypes.MustRegister((*cryptoproto.PublicKey_Bls12381)(nil))
+ jsontypes.MustRegister((*cryptoproto.PublicKey_Ed25519)(nil))
+ jsontypes.MustRegister((*cryptoproto.PublicKey_Secp256K1)(nil))
}
// PubKeyToProto takes crypto.PubKey and transforms it to a protobuf Pubkey
diff --git a/crypto/example_test.go b/crypto/example_test.go
deleted file mode 100644
index f1d0013d48..0000000000
--- a/crypto/example_test.go
+++ /dev/null
@@ -1,28 +0,0 @@
-// Copyright 2017 Tendermint. All Rights Reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package crypto_test
-
-import (
- "fmt"
-
- "github.com/tendermint/tendermint/crypto"
-)
-
-func ExampleSha256() {
- sum := crypto.Sha256([]byte("This is Tendermint"))
- fmt.Printf("%x\n", sum)
- // Output:
- // f91afb642f3d1c87c17eb01aae5cb65c242dfdbe7cf1066cc260f4ce5d33b94e
-}
diff --git a/crypto/hash.go b/crypto/hash.go
deleted file mode 100644
index e1d22523f2..0000000000
--- a/crypto/hash.go
+++ /dev/null
@@ -1,11 +0,0 @@
-package crypto
-
-import (
- "crypto/sha256"
-)
-
-func Sha256(bytes []byte) []byte {
- hasher := sha256.New()
- hasher.Write(bytes)
- return hasher.Sum(nil)
-}
diff --git a/crypto/merkle/hash.go b/crypto/merkle/hash.go
index 9c6df1786e..0bb5448d71 100644
--- a/crypto/merkle/hash.go
+++ b/crypto/merkle/hash.go
@@ -3,7 +3,7 @@ package merkle
import (
"hash"
- "github.com/tendermint/tendermint/crypto/tmhash"
+ "github.com/tendermint/tendermint/crypto"
)
// TODO: make these have a large predefined capacity
@@ -14,12 +14,12 @@ var (
// returns tmhash()
func emptyHash() []byte {
- return tmhash.Sum([]byte{})
+ return crypto.Checksum([]byte{})
}
// returns tmhash(0x00 || leaf)
func leafHash(leaf []byte) []byte {
- return tmhash.Sum(append(leafPrefix, leaf...))
+ return crypto.Checksum(append(leafPrefix, leaf...))
}
// returns tmhash(0x00 || leaf)
@@ -36,7 +36,7 @@ func innerHash(left []byte, right []byte) []byte {
n := copy(data, innerPrefix)
n += copy(data[n:], left)
copy(data[n:], right)
- return tmhash.Sum(data)
+ return crypto.Checksum(data)[:]
}
func innerHashOpt(s hash.Hash, left []byte, right []byte) []byte {
diff --git a/crypto/merkle/proof.go b/crypto/merkle/proof.go
index 80b289d231..8b98d1b21b 100644
--- a/crypto/merkle/proof.go
+++ b/crypto/merkle/proof.go
@@ -5,7 +5,7 @@ import (
"errors"
"fmt"
- "github.com/tendermint/tendermint/crypto/tmhash"
+ "github.com/tendermint/tendermint/crypto"
tmcrypto "github.com/tendermint/tendermint/proto/tendermint/crypto"
)
@@ -24,10 +24,10 @@ const (
// everything. This also affects the generalized proof system as
// well.
type Proof struct {
- Total int64 `json:"total"` // Total number of items.
- Index int64 `json:"index"` // Index of item to prove.
- LeafHash []byte `json:"leaf_hash"` // Hash of item value.
- Aunts [][]byte `json:"aunts"` // Hashes from leaf's sibling to a root's child.
+ Total int64 `json:"total,string"` // Total number of items.
+ Index int64 `json:"index,string"` // Index of item to prove.
+ LeafHash []byte `json:"leaf_hash"` // Hash of item value.
+ Aunts [][]byte `json:"aunts"` // Hashes from leaf's sibling to a root's child.
}
// ProofsFromByteSlices computes inclusion proof for given items.
@@ -102,15 +102,15 @@ func (sp *Proof) ValidateBasic() error {
if sp.Index < 0 {
return errors.New("negative Index")
}
- if len(sp.LeafHash) != tmhash.Size {
- return fmt.Errorf("expected LeafHash size to be %d, got %d", tmhash.Size, len(sp.LeafHash))
+ if len(sp.LeafHash) != crypto.HashSize {
+ return fmt.Errorf("expected LeafHash size to be %d, got %d", crypto.HashSize, len(sp.LeafHash))
}
if len(sp.Aunts) > MaxAunts {
return fmt.Errorf("expected no more than %d aunts, got %d", MaxAunts, len(sp.Aunts))
}
for i, auntHash := range sp.Aunts {
- if len(auntHash) != tmhash.Size {
- return fmt.Errorf("expected Aunts#%d size to be %d, got %d", i, tmhash.Size, len(auntHash))
+ if len(auntHash) != crypto.HashSize {
+ return fmt.Errorf("expected Aunts#%d size to be %d, got %d", i, crypto.HashSize, len(auntHash))
}
}
return nil
diff --git a/crypto/merkle/proof_key_path_test.go b/crypto/merkle/proof_key_path_test.go
index 0cc947643f..13d26b3601 100644
--- a/crypto/merkle/proof_key_path_test.go
+++ b/crypto/merkle/proof_key_path_test.go
@@ -28,13 +28,13 @@ func TestKeyPath(t *testing.T) {
case KeyEncodingHex:
rand.Read(keys[i])
default:
- panic("Unexpected encoding")
+ require.Fail(t, "Unexpected encoding")
}
path = path.AppendKey(keys[i], enc)
}
res, err := KeyPathToKeys(path.String())
- require.Nil(t, err)
+ require.NoError(t, err)
require.Equal(t, len(keys), len(res))
for i, key := range keys {
diff --git a/crypto/merkle/proof_test.go b/crypto/merkle/proof_test.go
index f0d2f86896..05a5ca369a 100644
--- a/crypto/merkle/proof_test.go
+++ b/crypto/merkle/proof_test.go
@@ -79,58 +79,58 @@ func TestProofOperators(t *testing.T) {
// Good
popz := ProofOperators([]ProofOperator{op1, op2, op3, op4})
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.Nil(t, err)
+ assert.NoError(t, err)
err = popz.VerifyValue(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", bz("INPUT1"))
- assert.Nil(t, err)
+ assert.NoError(t, err)
// BAD INPUT
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1_WRONG")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
err = popz.VerifyValue(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", bz("INPUT1_WRONG"))
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD KEY 1
err = popz.Verify(bz("OUTPUT4"), "/KEY3/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD KEY 2
err = popz.Verify(bz("OUTPUT4"), "KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD KEY 3
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1/", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD KEY 4
err = popz.Verify(bz("OUTPUT4"), "//KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD KEY 5
err = popz.Verify(bz("OUTPUT4"), "/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD OUTPUT 1
err = popz.Verify(bz("OUTPUT4_WRONG"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD OUTPUT 2
err = popz.Verify(bz(""), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD POPZ 1
popz = []ProofOperator{op1, op2, op4}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD POPZ 2
popz = []ProofOperator{op4, op3, op2, op1}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
// BAD POPZ 3
popz = []ProofOperator{}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
- assert.NotNil(t, err)
+ assert.Error(t, err)
}
func bz(s string) []byte {
diff --git a/crypto/merkle/proof_value.go b/crypto/merkle/proof_value.go
index ab776216b0..0f4f2eb3dd 100644
--- a/crypto/merkle/proof_value.go
+++ b/crypto/merkle/proof_value.go
@@ -2,9 +2,9 @@ package merkle
import (
"bytes"
+ "crypto/sha256"
"fmt"
- "github.com/tendermint/tendermint/crypto/tmhash"
tmcrypto "github.com/tendermint/tendermint/proto/tendermint/crypto"
)
@@ -79,14 +79,13 @@ func (op ValueOp) Run(args [][]byte) ([][]byte, error) {
return nil, fmt.Errorf("expected 1 arg, got %v", len(args))
}
value := args[0]
- hasher := tmhash.New()
- hasher.Write(value)
- vhash := hasher.Sum(nil)
+
+ vhash := sha256.Sum256(value)
bz := new(bytes.Buffer)
// Wrap to hash the KVPair.
- encodeByteSlice(bz, op.key) // nolint: errcheck // does not error
- encodeByteSlice(bz, vhash) // nolint: errcheck // does not error
+ encodeByteSlice(bz, op.key) //nolint: errcheck // does not error
+ encodeByteSlice(bz, vhash[:]) //nolint: errcheck // does not error
kvhash := leafHash(bz.Bytes())
if !bytes.Equal(kvhash, op.Proof.LeafHash) {
diff --git a/crypto/merkle/rfc6962_test.go b/crypto/merkle/rfc6962_test.go
index 571e5c75f5..f22a48a32e 100644
--- a/crypto/merkle/rfc6962_test.go
+++ b/crypto/merkle/rfc6962_test.go
@@ -20,7 +20,7 @@ import (
"encoding/hex"
"testing"
- "github.com/tendermint/tendermint/crypto/tmhash"
+ "github.com/tendermint/tendermint/crypto"
)
func TestRFC6962Hasher(t *testing.T) {
@@ -39,7 +39,7 @@ func TestRFC6962Hasher(t *testing.T) {
// echo -n '' | sha256sum
{
desc: "RFC6962 Empty Tree",
- want: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"[:tmhash.Size*2],
+ want: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"[:crypto.HashSize*2],
got: emptyTreeHash,
},
@@ -47,19 +47,19 @@ func TestRFC6962Hasher(t *testing.T) {
// echo -n 00 | xxd -r -p | sha256sum
{
desc: "RFC6962 Empty Leaf",
- want: "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"[:tmhash.Size*2],
+ want: "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"[:crypto.HashSize*2],
got: emptyLeafHash,
},
// echo -n 004C313233343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Leaf",
- want: "395aa064aa4c29f7010acfe3f25db9485bbd4b91897b6ad7ad547639252b4d56"[:tmhash.Size*2],
+ want: "395aa064aa4c29f7010acfe3f25db9485bbd4b91897b6ad7ad547639252b4d56"[:crypto.HashSize*2],
got: leafHash,
},
// echo -n 014E3132334E343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Node",
- want: "aa217fe888e47007fa15edab33c2b492a722cb106c64667fc2b044444de66bbb"[:tmhash.Size*2],
+ want: "aa217fe888e47007fa15edab33c2b492a722cb106c64667fc2b044444de66bbb"[:crypto.HashSize*2],
got: innerHash([]byte("N123"), []byte("N456")),
},
} {
diff --git a/crypto/merkle/tree_test.go b/crypto/merkle/tree_test.go
index 641c46b76c..72b260178f 100644
--- a/crypto/merkle/tree_test.go
+++ b/crypto/merkle/tree_test.go
@@ -7,7 +7,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
- "github.com/tendermint/tendermint/crypto/tmhash"
+ "github.com/tendermint/tendermint/crypto"
ctest "github.com/tendermint/tendermint/internal/libs/test"
tmrand "github.com/tendermint/tendermint/libs/rand"
)
@@ -53,7 +53,7 @@ func TestProof(t *testing.T) {
items := make([][]byte, total)
for i := 0; i < total; i++ {
- items[i] = testItem(tmrand.Bytes(tmhash.Size))
+ items[i] = testItem(tmrand.Bytes(crypto.HashSize))
}
rootHash = HashFromByteSlices(items)
@@ -106,7 +106,7 @@ func TestHashAlternatives(t *testing.T) {
items := make([][]byte, total)
for i := 0; i < total; i++ {
- items[i] = testItem(tmrand.Bytes(tmhash.Size))
+ items[i] = testItem(tmrand.Bytes(crypto.HashSize))
}
rootHash1 := HashFromByteSlicesIterative(items)
@@ -119,7 +119,7 @@ func BenchmarkHashAlternatives(b *testing.B) {
items := make([][]byte, total)
for i := 0; i < total; i++ {
- items[i] = testItem(tmrand.Bytes(tmhash.Size))
+ items[i] = testItem(tmrand.Bytes(crypto.HashSize))
}
b.ResetTimer()
diff --git a/crypto/random.go b/crypto/random.go
index 275fb1044f..352ea0a3ec 100644
--- a/crypto/random.go
+++ b/crypto/random.go
@@ -1,26 +1,20 @@
package crypto
import (
- crand "crypto/rand"
+ "crypto/rand"
"encoding/hex"
- "io"
)
// This only uses the OS's randomness
-func randBytes(numBytes int) []byte {
+func CRandBytes(numBytes int) []byte {
b := make([]byte, numBytes)
- _, err := crand.Read(b)
+ _, err := rand.Read(b)
if err != nil {
panic(err)
}
return b
}
-// This only uses the OS's randomness
-func CRandBytes(numBytes int) []byte {
- return randBytes(numBytes)
-}
-
// CRandHex returns a hex encoded string that's floor(numDigits/2) * 2 long.
//
// Note: CRandHex(24) gives 96 bits of randomness that
@@ -28,8 +22,3 @@ func CRandBytes(numBytes int) []byte {
func CRandHex(numDigits int) string {
return hex.EncodeToString(CRandBytes(numDigits / 2))
}
-
-// Returns a crand.Reader.
-func CReader() io.Reader {
- return crand.Reader
-}
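The `random.go` hunks collapse `randBytes` into `CRandBytes` and drop `CReader()` in favor of using `crypto/rand.Reader` directly at call sites. A small sketch of the resulting helpers (names lowercased here since this is an illustration, not the package itself):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// cRandBytes reads numBytes of OS randomness and panics on failure,
// matching CRandBytes after the diff merges randBytes into it.
func cRandBytes(numBytes int) []byte {
	b := make([]byte, numBytes)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return b
}

// cRandHex returns floor(numDigits/2)*2 hex digits: integer division
// rounds an odd request down, so cRandHex(25) also yields 24 digits.
func cRandHex(numDigits int) string {
	return hex.EncodeToString(cRandBytes(numDigits / 2))
}

func main() {
	// 24 digits -> 12 random bytes -> 96 bits of randomness,
	// as the retained comment in the diff notes.
	fmt.Println(len(cRandHex(24)))
	// prints 24
}
```
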
diff --git a/crypto/secp256k1/secp256k1.go b/crypto/secp256k1/secp256k1.go
index 52aee0d5d5..c520360b3c 100644
--- a/crypto/secp256k1/secp256k1.go
+++ b/crypto/secp256k1/secp256k1.go
@@ -2,6 +2,7 @@ package secp256k1
import (
"bytes"
+ "crypto/rand"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
@@ -13,10 +14,10 @@ import (
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/tendermint/crypto"
- tmjson "github.com/tendermint/tendermint/libs/json"
+ "github.com/tendermint/tendermint/internal/jsontypes"
// necessary for Bitcoin address format
- "golang.org/x/crypto/ripemd160" // nolint
+ "golang.org/x/crypto/ripemd160" //nolint:staticcheck
)
//-------------------------------------
@@ -29,8 +30,8 @@ const (
)
func init() {
- tmjson.RegisterType(PubKey{}, PubKeyName)
- tmjson.RegisterType(PrivKey{}, PrivKeyName)
+ jsontypes.MustRegister(PubKey{})
+ jsontypes.MustRegister(PrivKey{})
}
var _ crypto.PrivKey = PrivKey{}
@@ -38,6 +39,9 @@ var _ crypto.PrivKey = PrivKey{}
// PrivKey implements PrivKey.
type PrivKey []byte
+// TypeTag satisfies the jsontypes.Tagged interface.
+func (PrivKey) TypeTag() string { return PrivKeyName }
+
// Bytes marshalls the private key using amino encoding.
func (privKey PrivKey) Bytes() []byte {
return []byte(privKey)
@@ -73,7 +77,7 @@ func (privKey PrivKey) TypeValue() crypto.KeyType {
// GenPrivKey generates a new ECDSA private key on curve secp256k1 private key.
// It uses OS randomness to generate the private key.
func GenPrivKey() PrivKey {
- return genPrivKey(crypto.CReader())
+ return genPrivKey(rand.Reader)
}
// genPrivKey generates a new secp256k1 private key using the provided reader.
@@ -145,6 +149,9 @@ const PubKeySize = 33
// This prefix is followed with the x-coordinate.
type PubKey []byte
+// TypeTag satisfies the jsontypes.Tagged interface.
+func (PubKey) TypeTag() string { return PubKeyName }
+
// Address returns a Bitcoin style addresses: RIPEMD160(SHA256(pubkey))
func (pubKey PubKey) Address() crypto.Address {
if len(pubKey) != PubKeySize {
@@ -199,8 +206,8 @@ var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
-
- sig, err := priv.Sign(crypto.Sha256(msg))
+ seed := sha256.Sum256(msg)
+ sig, err := priv.Sign(seed[:])
if err != nil {
return nil, err
}
@@ -229,28 +236,8 @@ func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
return false
}
- return signature.Verify(crypto.Sha256(msg), pub)
-}
-
-// Read Signature struct from R || S. Caller needs to ensure
-// that len(sigStr) == 64.
-func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
- return &secp256k1.Signature{
- R: new(big.Int).SetBytes(sigStr[:32]),
- S: new(big.Int).SetBytes(sigStr[32:64]),
- }
-}
-
-// Serialize signature to R || S.
-// R, S are padded to 32 bytes respectively.
-func serializeSig(sig *secp256k1.Signature) []byte {
- rBytes := sig.R.Bytes()
- sBytes := sig.S.Bytes()
- sigBytes := make([]byte, 64)
- // 0 pad the byte arrays from the left if they aren't big enough.
- copy(sigBytes[32-len(rBytes):32], rBytes)
- copy(sigBytes[64-len(sBytes):64], sBytes)
- return sigBytes
+ seed := sha256.Sum256(msg)
+ return signature.Verify(seed[:], pub)
}
// SignDigest creates an ECDSA signature on curve Secp256k1.
@@ -278,3 +265,24 @@ func (pubKey PubKey) VerifyAggregateSignature(messages [][]byte, sig []byte) boo
func (pubKey PubKey) VerifySignatureDigest(hash []byte, sig []byte) bool {
return false
}
+
+// Read Signature struct from R || S. Caller needs to ensure
+// that len(sigStr) == 64.
+func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
+ return &secp256k1.Signature{
+ R: new(big.Int).SetBytes(sigStr[:32]),
+ S: new(big.Int).SetBytes(sigStr[32:64]),
+ }
+}
+
+// Serialize signature to R || S.
+// R, S are padded to 32 bytes respectively.
+func serializeSig(sig *secp256k1.Signature) []byte {
+ rBytes := sig.R.Bytes()
+ sBytes := sig.S.Bytes()
+ sigBytes := make([]byte, 64)
+ // 0 pad the byte arrays from the left if they aren't big enough.
+ copy(sigBytes[32-len(rBytes):32], rBytes)
+ copy(sigBytes[64-len(sBytes):64], sBytes)
+ return sigBytes
+}
diff --git a/crypto/secp256k1/secp256k1_test.go b/crypto/secp256k1/secp256k1_test.go
index 7a11092939..6cd53704c5 100644
--- a/crypto/secp256k1/secp256k1_test.go
+++ b/crypto/secp256k1/secp256k1_test.go
@@ -52,7 +52,7 @@ func TestSignAndValidateSecp256k1(t *testing.T) {
msg := crypto.CRandBytes(128)
sig, err := privKey.Sign(msg)
- require.Nil(t, err)
+ require.NoError(t, err)
assert.True(t, pubKey.VerifySignature(msg, sig))
diff --git a/crypto/tmhash/hash.go b/crypto/tmhash/hash.go
deleted file mode 100644
index f9b9582420..0000000000
--- a/crypto/tmhash/hash.go
+++ /dev/null
@@ -1,65 +0,0 @@
-package tmhash
-
-import (
- "crypto/sha256"
- "hash"
-)
-
-const (
- Size = sha256.Size
- BlockSize = sha256.BlockSize
-)
-
-// New returns a new hash.Hash.
-func New() hash.Hash {
- return sha256.New()
-}
-
-// Sum returns the SHA256 of the bz.
-func Sum(bz []byte) []byte {
- h := sha256.Sum256(bz)
- return h[:]
-}
-
-//-------------------------------------------------------------
-
-const (
- TruncatedSize = 20
-)
-
-type sha256trunc struct {
- sha256 hash.Hash
-}
-
-func (h sha256trunc) Write(p []byte) (n int, err error) {
- return h.sha256.Write(p)
-}
-func (h sha256trunc) Sum(b []byte) []byte {
- shasum := h.sha256.Sum(b)
- return shasum[:TruncatedSize]
-}
-
-func (h sha256trunc) Reset() {
- h.sha256.Reset()
-}
-
-func (h sha256trunc) Size() int {
- return TruncatedSize
-}
-
-func (h sha256trunc) BlockSize() int {
- return h.sha256.BlockSize()
-}
-
-// NewTruncated returns a new hash.Hash.
-func NewTruncated() hash.Hash {
- return sha256trunc{
- sha256: sha256.New(),
- }
-}
-
-// SumTruncated returns the first 20 bytes of SHA256 of the bz.
-func SumTruncated(bz []byte) []byte {
- hash := sha256.Sum256(bz)
- return hash[:TruncatedSize]
-}
diff --git a/crypto/tmhash/hash_test.go b/crypto/tmhash/hash_test.go
deleted file mode 100644
index cf9991b3b2..0000000000
--- a/crypto/tmhash/hash_test.go
+++ /dev/null
@@ -1,48 +0,0 @@
-package tmhash_test
-
-import (
- "crypto/sha256"
- "testing"
-
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-
- "github.com/tendermint/tendermint/crypto/tmhash"
-)
-
-func TestHash(t *testing.T) {
- testVector := []byte("abc")
- hasher := tmhash.New()
- _, err := hasher.Write(testVector)
- require.NoError(t, err)
- bz := hasher.Sum(nil)
-
- bz2 := tmhash.Sum(testVector)
-
- hasher = sha256.New()
- _, err = hasher.Write(testVector)
- require.NoError(t, err)
- bz3 := hasher.Sum(nil)
-
- assert.Equal(t, bz, bz2)
- assert.Equal(t, bz, bz3)
-}
-
-func TestHashTruncated(t *testing.T) {
- testVector := []byte("abc")
- hasher := tmhash.NewTruncated()
- _, err := hasher.Write(testVector)
- require.NoError(t, err)
- bz := hasher.Sum(nil)
-
- bz2 := tmhash.SumTruncated(testVector)
-
- hasher = sha256.New()
- _, err = hasher.Write(testVector)
- require.NoError(t, err)
- bz3 := hasher.Sum(nil)
- bz3 = bz3[:tmhash.TruncatedSize]
-
- assert.Equal(t, bz, bz2)
- assert.Equal(t, bz, bz3)
-}
diff --git a/crypto/version.go b/crypto/version.go
deleted file mode 100644
index 77c0bed8a2..0000000000
--- a/crypto/version.go
+++ /dev/null
@@ -1,3 +0,0 @@
-package crypto
-
-const Version = "0.9.0-dev"
diff --git a/crypto/xchacha20poly1305/vector_test.go b/crypto/xchacha20poly1305/vector_test.go
deleted file mode 100644
index c6ca9d8d23..0000000000
--- a/crypto/xchacha20poly1305/vector_test.go
+++ /dev/null
@@ -1,122 +0,0 @@
-package xchacha20poly1305
-
-import (
- "bytes"
- "encoding/hex"
- "testing"
-)
-
-func toHex(bits []byte) string {
- return hex.EncodeToString(bits)
-}
-
-func fromHex(bits string) []byte {
- b, err := hex.DecodeString(bits)
- if err != nil {
- panic(err)
- }
- return b
-}
-
-func TestHChaCha20(t *testing.T) {
- for i, v := range hChaCha20Vectors {
- var key [32]byte
- var nonce [16]byte
- copy(key[:], v.key)
- copy(nonce[:], v.nonce)
-
- HChaCha20(&key, &nonce, &key)
- if !bytes.Equal(key[:], v.keystream) {
- t.Errorf("test %d: keystream mismatch:\n \t got: %s\n \t want: %s", i, toHex(key[:]), toHex(v.keystream))
- }
- }
-}
-
-var hChaCha20Vectors = []struct {
- key, nonce, keystream []byte
-}{
- {
- fromHex("0000000000000000000000000000000000000000000000000000000000000000"),
- fromHex("000000000000000000000000000000000000000000000000"),
- fromHex("1140704c328d1d5d0e30086cdf209dbd6a43b8f41518a11cc387b669b2ee6586"),
- },
- {
- fromHex("8000000000000000000000000000000000000000000000000000000000000000"),
- fromHex("000000000000000000000000000000000000000000000000"),
- fromHex("7d266a7fd808cae4c02a0a70dcbfbcc250dae65ce3eae7fc210f54cc8f77df86"),
- },
- {
- fromHex("0000000000000000000000000000000000000000000000000000000000000001"),
- fromHex("000000000000000000000000000000000000000000000002"),
- fromHex("e0c77ff931bb9163a5460c02ac281c2b53d792b1c43fea817e9ad275ae546963"),
- },
- {
- fromHex("000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"),
- fromHex("000102030405060708090a0b0c0d0e0f1011121314151617"),
- fromHex("51e3ff45a895675c4b33b46c64f4a9ace110d34df6a2ceab486372bacbd3eff6"),
- },
- {
- fromHex("24f11cce8a1b3d61e441561a696c1c1b7e173d084fd4812425435a8896a013dc"),
- fromHex("d9660c5900ae19ddad28d6e06e45fe5e"),
- fromHex("5966b3eec3bff1189f831f06afe4d4e3be97fa9235ec8c20d08acfbbb4e851e3"),
- },
-}
-
-func TestVectors(t *testing.T) {
- for i, v := range vectors {
- if len(v.plaintext) == 0 {
- v.plaintext = make([]byte, len(v.ciphertext))
- }
-
- var nonce [24]byte
- copy(nonce[:], v.nonce)
-
- aead, err := New(v.key)
- if err != nil {
- t.Error(err)
- }
-
- dst := aead.Seal(nil, nonce[:], v.plaintext, v.ad)
- if !bytes.Equal(dst, v.ciphertext) {
- t.Errorf("test %d: ciphertext mismatch:\n \t got: %s\n \t want: %s", i, toHex(dst), toHex(v.ciphertext))
- }
- open, err := aead.Open(nil, nonce[:], dst, v.ad)
- if err != nil {
- t.Error(err)
- }
- if !bytes.Equal(open, v.plaintext) {
- t.Errorf("test %d: plaintext mismatch:\n \t got: %s\n \t want: %s", i, string(open), string(v.plaintext))
- }
- }
-}
-
-var vectors = []struct {
- key, nonce, ad, plaintext, ciphertext []byte
-}{
- {
- []byte{
- 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a,
- 0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95,
- 0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
- },
- []byte{0x07, 0x00, 0x00, 0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b},
- []byte{0x50, 0x51, 0x52, 0x53, 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7},
- []byte(
- "Ladies and Gentlemen of the class of '99: If I could offer you only one tip for the future, sunscreen would be it.",
- ),
- []byte{
- 0x45, 0x3c, 0x06, 0x93, 0xa7, 0x40, 0x7f, 0x04, 0xff, 0x4c, 0x56,
- 0xae, 0xdb, 0x17, 0xa3, 0xc0, 0xa1, 0xaf, 0xff, 0x01, 0x17, 0x49,
- 0x30, 0xfc, 0x22, 0x28, 0x7c, 0x33, 0xdb, 0xcf, 0x0a, 0xc8, 0xb8,
- 0x9a, 0xd9, 0x29, 0x53, 0x0a, 0x1b, 0xb3, 0xab, 0x5e, 0x69, 0xf2,
- 0x4c, 0x7f, 0x60, 0x70, 0xc8, 0xf8, 0x40, 0xc9, 0xab, 0xb4, 0xf6,
- 0x9f, 0xbf, 0xc8, 0xa7, 0xff, 0x51, 0x26, 0xfa, 0xee, 0xbb, 0xb5,
- 0x58, 0x05, 0xee, 0x9c, 0x1c, 0xf2, 0xce, 0x5a, 0x57, 0x26, 0x32,
- 0x87, 0xae, 0xc5, 0x78, 0x0f, 0x04, 0xec, 0x32, 0x4c, 0x35, 0x14,
- 0x12, 0x2c, 0xfc, 0x32, 0x31, 0xfc, 0x1a, 0x8b, 0x71, 0x8a, 0x62,
- 0x86, 0x37, 0x30, 0xa2, 0x70, 0x2b, 0xb7, 0x63, 0x66, 0x11, 0x6b,
- 0xed, 0x09, 0xe0, 0xfd, 0x5c, 0x6d, 0x84, 0xb6, 0xb0, 0xc1, 0xab,
- 0xaf, 0x24, 0x9d, 0x5d, 0xd0, 0xf7, 0xf5, 0xa7, 0xea,
- },
- },
-}
diff --git a/crypto/xchacha20poly1305/xchachapoly.go b/crypto/xchacha20poly1305/xchachapoly.go
deleted file mode 100644
index 2578520a5a..0000000000
--- a/crypto/xchacha20poly1305/xchachapoly.go
+++ /dev/null
@@ -1,259 +0,0 @@
-// Package xchacha20poly1305 creates an AEAD using hchacha, chacha, and poly1305
-// This allows for randomized nonces to be used in conjunction with chacha.
-package xchacha20poly1305
-
-import (
- "crypto/cipher"
- "encoding/binary"
- "errors"
- "fmt"
-
- "golang.org/x/crypto/chacha20poly1305"
-)
-
-// Implements crypto.AEAD
-type xchacha20poly1305 struct {
- key [KeySize]byte
-}
-
-const (
- // KeySize is the size of the key used by this AEAD, in bytes.
- KeySize = 32
- // NonceSize is the size of the nonce used with this AEAD, in bytes.
- NonceSize = 24
- // TagSize is the size added from poly1305
- TagSize = 16
- // MaxPlaintextSize is the max size that can be passed into a single call of Seal
- MaxPlaintextSize = (1 << 38) - 64
- // MaxCiphertextSize is the max size that can be passed into a single call of Open,
- // this differs from plaintext size due to the tag
- MaxCiphertextSize = (1 << 38) - 48
-
- // sigma are constants used in xchacha.
- // Unrolled from a slice so that they can be inlined, as slices can't be constants.
- sigma0 = uint32(0x61707865)
- sigma1 = uint32(0x3320646e)
- sigma2 = uint32(0x79622d32)
- sigma3 = uint32(0x6b206574)
-)
-
-// New returns a new xchachapoly1305 AEAD
-func New(key []byte) (cipher.AEAD, error) {
- if len(key) != KeySize {
- return nil, errors.New("xchacha20poly1305: bad key length")
- }
- ret := new(xchacha20poly1305)
- copy(ret.key[:], key)
- return ret, nil
-}
-
-func (c *xchacha20poly1305) NonceSize() int {
- return NonceSize
-}
-
-func (c *xchacha20poly1305) Overhead() int {
- return TagSize
-}
-
-func (c *xchacha20poly1305) Seal(dst, nonce, plaintext, additionalData []byte) []byte {
- if len(nonce) != NonceSize {
- panic("xchacha20poly1305: bad nonce length passed to Seal")
- }
-
- if uint64(len(plaintext)) > MaxPlaintextSize {
- panic("xchacha20poly1305: plaintext too large")
- }
-
- var subKey [KeySize]byte
- var hNonce [16]byte
- var subNonce [chacha20poly1305.NonceSize]byte
- copy(hNonce[:], nonce[:16])
-
- HChaCha20(&subKey, &hNonce, &c.key)
-
- // This can't error because we always provide a correctly sized key
- chacha20poly1305, _ := chacha20poly1305.New(subKey[:])
-
- copy(subNonce[4:], nonce[16:])
-
- return chacha20poly1305.Seal(dst, subNonce[:], plaintext, additionalData)
-}
-
-func (c *xchacha20poly1305) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) {
- if len(nonce) != NonceSize {
- return nil, fmt.Errorf("xchacha20poly1305: bad nonce length passed to Open")
- }
- if uint64(len(ciphertext)) > MaxCiphertextSize {
- return nil, fmt.Errorf("xchacha20poly1305: ciphertext too large")
- }
- var subKey [KeySize]byte
- var hNonce [16]byte
- var subNonce [chacha20poly1305.NonceSize]byte
- copy(hNonce[:], nonce[:16])
-
- HChaCha20(&subKey, &hNonce, &c.key)
-
- // This can't error because we always provide a correctly sized key
- chacha20poly1305, _ := chacha20poly1305.New(subKey[:])
-
- copy(subNonce[4:], nonce[16:])
-
- return chacha20poly1305.Open(dst, subNonce[:], ciphertext, additionalData)
-}
-
-// HChaCha exported from
-// https://github.com/aead/chacha20/blob/8b13a72661dae6e9e5dea04f344f0dc95ea29547/chacha/chacha_generic.go#L194
-// TODO: Add support for the different assembly instructions used there.
-
-// The MIT License (MIT)
-
-// Copyright (c) 2016 Andreas Auernhammer
-
-// Permission is hereby granted, free of charge, to any person obtaining a copy
-// of this software and associated documentation files (the "Software"), to deal
-// in the Software without restriction, including without limitation the rights
-// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-// copies of the Software, and to permit persons to whom the Software is
-// furnished to do so, subject to the following conditions:
-
-// The above copyright notice and this permission notice shall be included in all
-// copies or substantial portions of the Software.
-
-// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-// SOFTWARE.
-
-// HChaCha20 generates 32 pseudo-random bytes from a 128 bit nonce and a 256 bit secret key.
-// It can be used as a key-derivation-function (KDF).
-func HChaCha20(out *[32]byte, nonce *[16]byte, key *[32]byte) { hChaCha20Generic(out, nonce, key) }
-
-func hChaCha20Generic(out *[32]byte, nonce *[16]byte, key *[32]byte) {
- v00 := sigma0
- v01 := sigma1
- v02 := sigma2
- v03 := sigma3
- v04 := binary.LittleEndian.Uint32(key[0:])
- v05 := binary.LittleEndian.Uint32(key[4:])
- v06 := binary.LittleEndian.Uint32(key[8:])
- v07 := binary.LittleEndian.Uint32(key[12:])
- v08 := binary.LittleEndian.Uint32(key[16:])
- v09 := binary.LittleEndian.Uint32(key[20:])
- v10 := binary.LittleEndian.Uint32(key[24:])
- v11 := binary.LittleEndian.Uint32(key[28:])
- v12 := binary.LittleEndian.Uint32(nonce[0:])
- v13 := binary.LittleEndian.Uint32(nonce[4:])
- v14 := binary.LittleEndian.Uint32(nonce[8:])
- v15 := binary.LittleEndian.Uint32(nonce[12:])
-
- for i := 0; i < 20; i += 2 {
- v00 += v04
- v12 ^= v00
- v12 = (v12 << 16) | (v12 >> 16)
- v08 += v12
- v04 ^= v08
- v04 = (v04 << 12) | (v04 >> 20)
- v00 += v04
- v12 ^= v00
- v12 = (v12 << 8) | (v12 >> 24)
- v08 += v12
- v04 ^= v08
- v04 = (v04 << 7) | (v04 >> 25)
- v01 += v05
- v13 ^= v01
- v13 = (v13 << 16) | (v13 >> 16)
- v09 += v13
- v05 ^= v09
- v05 = (v05 << 12) | (v05 >> 20)
- v01 += v05
- v13 ^= v01
- v13 = (v13 << 8) | (v13 >> 24)
- v09 += v13
- v05 ^= v09
- v05 = (v05 << 7) | (v05 >> 25)
- v02 += v06
- v14 ^= v02
- v14 = (v14 << 16) | (v14 >> 16)
- v10 += v14
- v06 ^= v10
- v06 = (v06 << 12) | (v06 >> 20)
- v02 += v06
- v14 ^= v02
- v14 = (v14 << 8) | (v14 >> 24)
- v10 += v14
- v06 ^= v10
- v06 = (v06 << 7) | (v06 >> 25)
- v03 += v07
- v15 ^= v03
- v15 = (v15 << 16) | (v15 >> 16)
- v11 += v15
- v07 ^= v11
- v07 = (v07 << 12) | (v07 >> 20)
- v03 += v07
- v15 ^= v03
- v15 = (v15 << 8) | (v15 >> 24)
- v11 += v15
- v07 ^= v11
- v07 = (v07 << 7) | (v07 >> 25)
- v00 += v05
- v15 ^= v00
- v15 = (v15 << 16) | (v15 >> 16)
- v10 += v15
- v05 ^= v10
- v05 = (v05 << 12) | (v05 >> 20)
- v00 += v05
- v15 ^= v00
- v15 = (v15 << 8) | (v15 >> 24)
- v10 += v15
- v05 ^= v10
- v05 = (v05 << 7) | (v05 >> 25)
- v01 += v06
- v12 ^= v01
- v12 = (v12 << 16) | (v12 >> 16)
- v11 += v12
- v06 ^= v11
- v06 = (v06 << 12) | (v06 >> 20)
- v01 += v06
- v12 ^= v01
- v12 = (v12 << 8) | (v12 >> 24)
- v11 += v12
- v06 ^= v11
- v06 = (v06 << 7) | (v06 >> 25)
- v02 += v07
- v13 ^= v02
- v13 = (v13 << 16) | (v13 >> 16)
- v08 += v13
- v07 ^= v08
- v07 = (v07 << 12) | (v07 >> 20)
- v02 += v07
- v13 ^= v02
- v13 = (v13 << 8) | (v13 >> 24)
- v08 += v13
- v07 ^= v08
- v07 = (v07 << 7) | (v07 >> 25)
- v03 += v04
- v14 ^= v03
- v14 = (v14 << 16) | (v14 >> 16)
- v09 += v14
- v04 ^= v09
- v04 = (v04 << 12) | (v04 >> 20)
- v03 += v04
- v14 ^= v03
- v14 = (v14 << 8) | (v14 >> 24)
- v09 += v14
- v04 ^= v09
- v04 = (v04 << 7) | (v04 >> 25)
- }
-
- binary.LittleEndian.PutUint32(out[0:], v00)
- binary.LittleEndian.PutUint32(out[4:], v01)
- binary.LittleEndian.PutUint32(out[8:], v02)
- binary.LittleEndian.PutUint32(out[12:], v03)
- binary.LittleEndian.PutUint32(out[16:], v12)
- binary.LittleEndian.PutUint32(out[20:], v13)
- binary.LittleEndian.PutUint32(out[24:], v14)
- binary.LittleEndian.PutUint32(out[28:], v15)
-}
diff --git a/crypto/xchacha20poly1305/xchachapoly_test.go b/crypto/xchacha20poly1305/xchachapoly_test.go
deleted file mode 100644
index 6e42e50ace..0000000000
--- a/crypto/xchacha20poly1305/xchachapoly_test.go
+++ /dev/null
@@ -1,113 +0,0 @@
-package xchacha20poly1305
-
-import (
- "bytes"
- crand "crypto/rand"
- mrand "math/rand"
- "testing"
-)
-
-// The following test is taken from
-// https://github.com/golang/crypto/blob/master/chacha20poly1305/chacha20poly1305_test.go#L69
-// It requires the below copyright notice, where "this source code" refers to the following function.
-// Copyright 2016 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found at the bottom of this file.
-func TestRandom(t *testing.T) {
- // Some random tests to verify Open(Seal) == Plaintext
- for i := 0; i < 256; i++ {
- var nonce [24]byte
- var key [32]byte
-
- al := mrand.Intn(128)
- pl := mrand.Intn(16384)
- ad := make([]byte, al)
- plaintext := make([]byte, pl)
- _, err := crand.Read(key[:])
- if err != nil {
- t.Errorf("error on read: %s", err)
- }
- _, err = crand.Read(nonce[:])
- if err != nil {
- t.Errorf("error on read: %s", err)
- }
- _, err = crand.Read(ad)
- if err != nil {
- t.Errorf("error on read: %s", err)
- }
- _, err = crand.Read(plaintext)
- if err != nil {
- t.Errorf("error on read: %s", err)
- }
-
- aead, err := New(key[:])
- if err != nil {
- t.Fatal(err)
- }
-
- ct := aead.Seal(nil, nonce[:], plaintext, ad)
-
- plaintext2, err := aead.Open(nil, nonce[:], ct, ad)
- if err != nil {
- t.Errorf("random #%d: Open failed", i)
- continue
- }
-
- if !bytes.Equal(plaintext, plaintext2) {
- t.Errorf("random #%d: plaintext's don't match: got %x vs %x", i, plaintext2, plaintext)
- continue
- }
-
- if len(ad) > 0 {
- alterAdIdx := mrand.Intn(len(ad))
- ad[alterAdIdx] ^= 0x80
- if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
- t.Errorf("random #%d: Open was successful after altering additional data", i)
- }
- ad[alterAdIdx] ^= 0x80
- }
-
- alterNonceIdx := mrand.Intn(aead.NonceSize())
- nonce[alterNonceIdx] ^= 0x80
- if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
- t.Errorf("random #%d: Open was successful after altering nonce", i)
- }
- nonce[alterNonceIdx] ^= 0x80
-
- alterCtIdx := mrand.Intn(len(ct))
- ct[alterCtIdx] ^= 0x80
- if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
- t.Errorf("random #%d: Open was successful after altering ciphertext", i)
- }
- ct[alterCtIdx] ^= 0x80
- }
-}
-
-// AFOREMENTIONED LICENSE
-// Copyright (c) 2009 The Go Authors. All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/crypto/xsalsa20symmetric/symmetric.go b/crypto/xsalsa20symmetric/symmetric.go
deleted file mode 100644
index 74cb4b1033..0000000000
--- a/crypto/xsalsa20symmetric/symmetric.go
+++ /dev/null
@@ -1,54 +0,0 @@
-package xsalsa20symmetric
-
-import (
- "errors"
- "fmt"
-
- "golang.org/x/crypto/nacl/secretbox"
-
- "github.com/tendermint/tendermint/crypto"
-)
-
-// TODO, make this into a struct that implements crypto.Symmetric.
-
-const nonceLen = 24
-const secretLen = 32
-
-// secret must be 32 bytes long. Use something like Sha256(Bcrypt(passphrase))
-// The ciphertext is (secretbox.Overhead + 24) bytes longer than the plaintext.
-func EncryptSymmetric(plaintext []byte, secret []byte) (ciphertext []byte) {
- if len(secret) != secretLen {
- panic(fmt.Sprintf("Secret must be 32 bytes long, got len %v", len(secret)))
- }
- nonce := crypto.CRandBytes(nonceLen)
- nonceArr := [nonceLen]byte{}
- copy(nonceArr[:], nonce)
- secretArr := [secretLen]byte{}
- copy(secretArr[:], secret)
- ciphertext = make([]byte, nonceLen+secretbox.Overhead+len(plaintext))
- copy(ciphertext, nonce)
- secretbox.Seal(ciphertext[nonceLen:nonceLen], plaintext, &nonceArr, &secretArr)
- return ciphertext
-}
-
-// secret must be 32 bytes long. Use something like Sha256(Bcrypt(passphrase))
-// The ciphertext is (secretbox.Overhead + 24) bytes longer than the plaintext.
-func DecryptSymmetric(ciphertext []byte, secret []byte) (plaintext []byte, err error) {
- if len(secret) != secretLen {
- panic(fmt.Sprintf("Secret must be 32 bytes long, got len %v", len(secret)))
- }
- if len(ciphertext) <= secretbox.Overhead+nonceLen {
- return nil, errors.New("ciphertext is too short")
- }
- nonce := ciphertext[:nonceLen]
- nonceArr := [nonceLen]byte{}
- copy(nonceArr[:], nonce)
- secretArr := [secretLen]byte{}
- copy(secretArr[:], secret)
- plaintext = make([]byte, len(ciphertext)-nonceLen-secretbox.Overhead)
- _, ok := secretbox.Open(plaintext[:0], ciphertext[nonceLen:], &nonceArr, &secretArr)
- if !ok {
- return nil, errors.New("ciphertext decryption failed")
- }
- return plaintext, nil
-}
diff --git a/crypto/xsalsa20symmetric/symmetric_test.go b/crypto/xsalsa20symmetric/symmetric_test.go
deleted file mode 100644
index 160d49a9ef..0000000000
--- a/crypto/xsalsa20symmetric/symmetric_test.go
+++ /dev/null
@@ -1,40 +0,0 @@
-package xsalsa20symmetric
-
-import (
- "testing"
-
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-
- "golang.org/x/crypto/bcrypt"
-
- "github.com/tendermint/tendermint/crypto"
-)
-
-func TestSimple(t *testing.T) {
-
- plaintext := []byte("sometext")
- secret := []byte("somesecretoflengththirtytwo===32")
- ciphertext := EncryptSymmetric(plaintext, secret)
- plaintext2, err := DecryptSymmetric(ciphertext, secret)
-
- require.Nil(t, err, "%+v", err)
- assert.Equal(t, plaintext, plaintext2)
-}
-
-func TestSimpleWithKDF(t *testing.T) {
-
- plaintext := []byte("sometext")
- secretPass := []byte("somesecret")
- secret, err := bcrypt.GenerateFromPassword(secretPass, 12)
- if err != nil {
- t.Error(err)
- }
- secret = crypto.Sha256(secret)
-
- ciphertext := EncryptSymmetric(plaintext, secret)
- plaintext2, err := DecryptSymmetric(ciphertext, secret)
-
- require.Nil(t, err, "%+v", err)
- assert.Equal(t, plaintext, plaintext2)
-}
diff --git a/dashcore/rpc/client.go b/dash/core/client.go
similarity index 92%
rename from dashcore/rpc/client.go
rename to dash/core/client.go
index 3f3546fd9e..ae514f50da 100644
--- a/dashcore/rpc/client.go
+++ b/dash/core/client.go
@@ -1,4 +1,4 @@
-package dashcore
+package core
import (
"fmt"
@@ -13,7 +13,22 @@ import (
const ModuleName = "rpcclient"
+// QuorumVerifier represents subset of priv validator features that
+// allows verification of threshold signatures.
+type QuorumVerifier interface {
+ // QuorumVerify verifies quorum signature
+ QuorumVerify(
+ quorumType btcjson.LLMQType,
+ requestID bytes.HexBytes,
+ messageHash bytes.HexBytes,
+ signature bytes.HexBytes,
+ quorumHash bytes.HexBytes,
+ ) (bool, error)
+}
+
type Client interface {
+ QuorumVerifier
+
// QuorumInfo returns quorum info
QuorumInfo(quorumType btcjson.LLMQType, quorumHash crypto.QuorumHash) (*btcjson.QuorumInfoResult, error)
// MasternodeStatus returns masternode status
@@ -29,7 +44,6 @@ type Client interface {
messageHash bytes.HexBytes,
quorumHash bytes.HexBytes,
) (*btcjson.QuorumSignResult, error)
- // QuorumVerify verifies quorum signature
QuorumVerify(
quorumType btcjson.LLMQType,
requestID bytes.HexBytes,
diff --git a/dashcore/rpc/mock.go b/dash/core/mock.go
similarity index 92%
rename from dashcore/rpc/mock.go
rename to dash/core/mock.go
index 213d24a4ea..49063527b0 100644
--- a/dashcore/rpc/mock.go
+++ b/dash/core/mock.go
@@ -1,4 +1,4 @@
-package dashcore
+package core
import (
"context"
@@ -47,12 +47,13 @@ func (mc *MockClient) QuorumInfo(
quorumType btcjson.LLMQType,
quorumHash crypto.QuorumHash,
) (*btcjson.QuorumInfoResult, error) {
+ ctx := context.Background()
var members []btcjson.QuorumMember
- proTxHash, err := mc.localPV.GetProTxHash(context.Background())
+ proTxHash, err := mc.localPV.GetProTxHash(ctx)
if err != nil {
panic(err)
}
- pk, err := mc.localPV.GetPubKey(context.Background(), quorumHash)
+ pk, err := mc.localPV.GetPubKey(ctx, quorumHash)
if err != nil {
panic(err)
}
@@ -64,11 +65,11 @@ func (mc *MockClient) QuorumInfo(
PubKeyShare: pk.HexString(),
})
}
- tpk, err := mc.localPV.GetThresholdPublicKey(context.Background(), quorumHash)
+ tpk, err := mc.localPV.GetThresholdPublicKey(ctx, quorumHash)
if err != nil {
panic(err)
}
- height, err := mc.localPV.GetHeight(context.Background(), quorumHash)
+ height, err := mc.localPV.GetHeight(ctx, quorumHash)
if err != nil {
panic(err)
}
@@ -82,7 +83,8 @@ func (mc *MockClient) QuorumInfo(
}
func (mc *MockClient) MasternodeStatus() (*btcjson.MasternodeStatusResult, error) {
- proTxHash, err := mc.localPV.GetProTxHash(context.Background())
+ ctx := context.Background()
+ proTxHash, err := mc.localPV.GetProTxHash(ctx)
if err != nil {
panic(err)
}
diff --git a/dash/llmq/llmq.go b/dash/llmq/llmq.go
index 170403c37f..65cc23896a 100644
--- a/dash/llmq/llmq.go
+++ b/dash/llmq/llmq.go
@@ -1,6 +1,7 @@
package llmq
import (
+ cryptorand "crypto/rand"
"errors"
"fmt"
"io"
@@ -91,7 +92,7 @@ func Generate(proTxHashes []crypto.ProTxHash, opts ...optionFunc) (*Data, error)
conf := llmqConfig{
proTxHashes: bls12381.ReverseProTxHashes(proTxHashes),
threshold: len(proTxHashes)*2/3 + 1,
- seedReader: crypto.CReader(),
+ seedReader: cryptorand.Reader,
}
for _, opt := range opts {
opt(&conf)
diff --git a/dash/quorum/mock/dash_dialer.go b/dash/quorum/mock/dash_dialer.go
index aff3cabce4..59e4b30753 100644
--- a/dash/quorum/mock/dash_dialer.go
+++ b/dash/quorum/mock/dash_dialer.go
@@ -3,8 +3,8 @@ package mock
import (
"encoding/binary"
"encoding/hex"
+ "sync"
- "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/types"
)
diff --git a/dash/quorum/selectpeers/dip6.go b/dash/quorum/selectpeers/dip6.go
index d4621e10d8..2999bc7328 100644
--- a/dash/quorum/selectpeers/dip6.go
+++ b/dash/quorum/selectpeers/dip6.go
@@ -25,7 +25,7 @@ func NewDIP6ValidatorSelector(quorumHash bytes.HexBytes) ValidatorSelector {
return &dip6PeerSelector{quorumHash: quorumHash}
}
-// SelectValidator implements ValidtorSelector.
+// SelectValidators implements ValidatorSelector.
// SelectValidators selects some validators from `validatorSetMembers`, according to the algorithm
// described in DIP-6 https://github.com/dashpay/dips/blob/master/dip-0006.md
func (s *dip6PeerSelector) SelectValidators(
diff --git a/dash/quorum/selectpeers/sortable_validator.go b/dash/quorum/selectpeers/sortable_validator.go
index 78e86e18fb..35ff19f8f8 100644
--- a/dash/quorum/selectpeers/sortable_validator.go
+++ b/dash/quorum/selectpeers/sortable_validator.go
@@ -2,7 +2,6 @@ package selectpeers
import (
"bytes"
- "crypto/sha256"
"github.com/tendermint/tendermint/crypto"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
@@ -45,6 +44,5 @@ func calculateDIP6SortKey(proTxHash, quorumHash tmbytes.HexBytes) []byte {
keyBytes := make([]byte, 0, len(proTxHash)+len(quorumHash))
keyBytes = append(keyBytes, proTxHash...)
keyBytes = append(keyBytes, quorumHash...)
- keySHA := sha256.Sum256(keyBytes)
- return keySHA[:]
+ return crypto.Checksum(keyBytes)
}
diff --git a/dash/quorum/selectpeers/sorted_validator_list.go b/dash/quorum/selectpeers/sorted_validator_list.go
index 592f902d49..5bdfb6a8b8 100644
--- a/dash/quorum/selectpeers/sorted_validator_list.go
+++ b/dash/quorum/selectpeers/sorted_validator_list.go
@@ -23,7 +23,7 @@ func newSortedValidatorList(validators []*types.Validator, quorumHash tmbytes.He
return ret
}
-// Sort() sorts this sortableValidatorList
+// Sort sorts this sortedValidatorList
func (vl sortedValidatorList) Sort() {
sort.Sort(vl)
}
diff --git a/dash/quorum/validator_conn_executor.go b/dash/quorum/validator_conn_executor.go
index 7521d27690..20de216f46 100644
--- a/dash/quorum/validator_conn_executor.go
+++ b/dash/quorum/validator_conn_executor.go
@@ -4,14 +4,16 @@ import (
"context"
"errors"
"fmt"
+ "sync"
"time"
"github.com/hashicorp/go-multierror"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/dash/quorum/selectpeers"
- "github.com/tendermint/tendermint/internal/libs/sync"
+ "github.com/tendermint/tendermint/internal/eventbus"
"github.com/tendermint/tendermint/internal/p2p"
+ tmpubsub "github.com/tendermint/tendermint/internal/pubsub"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
@@ -41,11 +43,12 @@ type optionFunc func(vc *ValidatorConnExecutor) error
// Note that we mark peers that are members of active validator set as Persistent, so p2p subsystem
// will retry the connection if it fails.
type ValidatorConnExecutor struct {
- service.BaseService
+ *service.BaseService
+ logger log.Logger
proTxHash types.ProTxHash
- eventBus *types.EventBus
+ eventBus *eventbus.EventBus
dialer p2p.DashDialer
- subscription types.Subscription
+ subscription eventbus.Subscription
// validatorSetMembers contains validators active in the current Validator Set, indexed by node ID
validatorSetMembers validatorMap
@@ -72,11 +75,12 @@ var (
// Don't forget to Start() and Stop() the service.
func NewValidatorConnExecutor(
proTxHash types.ProTxHash,
- eventBus *types.EventBus,
+ eventBus *eventbus.EventBus,
connMgr p2p.DashDialer,
opts ...optionFunc,
) (*ValidatorConnExecutor, error) {
vc := &ValidatorConnExecutor{
+ logger: log.NewNopLogger(),
proTxHash: proTxHash,
eventBus: eventBus,
dialer: connMgr,
@@ -89,8 +93,7 @@ func NewValidatorConnExecutor(
resolverAddressBook: vc.dialer,
resolverTCP: NewTCPNodeIDResolver(),
}
- baseService := service.NewBaseService(log.NewNopLogger(), validatorConnExecutorName, vc)
- vc.BaseService = *baseService
+ vc.BaseService = service.NewBaseService(log.NewNopLogger(), validatorConnExecutorName, vc)
for _, opt := range opts {
err := opt(vc)
@@ -119,27 +122,27 @@ func WithValidatorsSet(valSet *types.ValidatorSet) func(vc *ValidatorConnExecuto
// WithLogger sets a logger
func WithLogger(logger log.Logger) func(vc *ValidatorConnExecutor) error {
return func(vc *ValidatorConnExecutor) error {
- vc.Logger = logger
+ vc.logger = logger
return nil
}
}
// OnStart implements Service to subscribe to Validator Update events
-func (vc *ValidatorConnExecutor) OnStart() error {
+func (vc *ValidatorConnExecutor) OnStart(ctx context.Context) error {
if err := vc.subscribe(); err != nil {
return err
}
err := vc.updateConnections()
if err != nil {
- vc.Logger.Error("Warning: ValidatorConnExecutor OnStart failed", "error", err)
+ vc.logger.Error("Warning: ValidatorConnExecutor OnStart failed", "error", err)
}
go func() {
var err error
for err == nil {
- err = vc.receiveEvents()
+ err = vc.receiveEvents(ctx)
}
- vc.Logger.Error("ValidatorConnExecutor goroutine finished", "reason", err)
+ vc.logger.Error("ValidatorConnExecutor goroutine finished", "reason", err)
}()
return nil
}
@@ -151,7 +154,7 @@ func (vc *ValidatorConnExecutor) OnStop() {
defer cancel()
err := vc.eventBus.UnsubscribeAll(ctx, validatorConnExecutorName)
if err != nil {
- vc.Logger.Error("cannot unsubscribe from channels", "error", err)
+ vc.logger.Error("cannot unsubscribe from channels", "error", err)
}
vc.eventBus = nil
}
@@ -161,11 +164,13 @@ func (vc *ValidatorConnExecutor) OnStop() {
func (vc *ValidatorConnExecutor) subscribe() error {
ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
defer cancel()
- updatesSub, err := vc.eventBus.Subscribe(
+ updatesSub, err := vc.eventBus.SubscribeWithArgs(
ctx,
- validatorConnExecutorName,
- types.EventQueryValidatorSetUpdates,
- vc.EventBusCapacity,
+ tmpubsub.SubscribeArgs{
+ ClientID: validatorConnExecutorName,
+ Query: types.EventQueryValidatorSetUpdates,
+ Limit: vc.EventBusCapacity,
+ },
)
if err != nil {
return err
@@ -177,45 +182,46 @@ func (vc *ValidatorConnExecutor) subscribe() error {
// receiveEvents processes received events and executes all the logic.
// Returns non-nil error only if fatal error occurred and the main goroutine should be terminated.
-func (vc *ValidatorConnExecutor) receiveEvents() error {
- vc.Logger.Debug("ValidatorConnExecutor: waiting for an event")
- select {
- case msg := <-vc.subscription.Out():
- event, ok := msg.Data().(types.EventDataValidatorSetUpdate)
- if !ok {
- return fmt.Errorf("invalid type of validator set update message: %T", event)
- }
- if err := vc.handleValidatorUpdateEvent(event); err != nil {
- vc.Logger.Error("cannot handle validator update", "error", err)
- return nil // non-fatal, so no error returned to continue the loop
+func (vc *ValidatorConnExecutor) receiveEvents(ctx context.Context) error {
+ vc.logger.Debug("ValidatorConnExecutor: waiting for an event")
+ sCtx, cancel := context.WithCancel(ctx) // TODO check value for correctness
+ defer cancel()
+ msg, err := vc.subscription.Next(sCtx)
+ if err != nil {
+ if errors.Is(err, context.Canceled) {
+ return fmt.Errorf("subscription canceled due to error: %w", sCtx.Err())
}
- vc.Logger.Debug("validator updates processed successfully", "event", event)
- case <-vc.subscription.Canceled():
- return fmt.Errorf("subscription canceled due to error: %w", vc.subscription.Err())
- case <-vc.BaseService.Quit():
- return fmt.Errorf("quit signal received")
+ return err
}
-
+ event, ok := msg.Data().(types.EventDataValidatorSetUpdate)
+ if !ok {
+ return fmt.Errorf("invalid type of validator set update message: %T", event)
+ }
+ if err := vc.handleValidatorUpdateEvent(event); err != nil {
+ vc.logger.Error("cannot handle validator update", "error", err)
+ return nil // non-fatal, so no error returned to continue the loop
+ }
+ vc.logger.Debug("validator updates processed successfully", "event", event)
return nil
}
-// handleValidatorUpdateEvent checks and executes event of type EventDataValidatorSetUpdates, received from event bus.
+// handleValidatorUpdateEvent checks and executes event of type EventDataValidatorSetUpdate, received from event bus.
func (vc *ValidatorConnExecutor) handleValidatorUpdateEvent(event types.EventDataValidatorSetUpdate) error {
vc.mux.Lock()
defer vc.mux.Unlock()
if len(event.ValidatorSetUpdates) < 1 {
- vc.Logger.Debug("no validators in ValidatorUpdates")
+ vc.logger.Debug("no validators in ValidatorUpdates")
return nil // not really an error
}
vc.validatorSetMembers = newValidatorMap(event.ValidatorSetUpdates)
if len(event.QuorumHash) > 0 {
if err := vc.setQuorumHash(event.QuorumHash); err != nil {
- vc.Logger.Error("received invalid quorum hash", "error", err)
+ vc.logger.Error("received invalid quorum hash", "error", err)
return fmt.Errorf("received invalid quorum hash: %w", err)
}
} else {
- vc.Logger.Debug("received empty quorum hash")
+ vc.logger.Debug("received empty quorum hash")
}
if err := vc.updateConnections(); err != nil {
return fmt.Errorf("inter-validator set connections error: %w", err)
@@ -257,7 +263,7 @@ func (vc *ValidatorConnExecutor) resolveNodeID(va *types.ValidatorAddress) error
va.NodeID = address.NodeID
return nil // success
}
- vc.Logger.Debug(
+ vc.logger.Debug(
"warning: validator node id lookup method failed",
"url", va.String(),
"method", method,
@@ -298,7 +304,7 @@ func (vc *ValidatorConnExecutor) ensureValidatorsHaveNodeIDs(validators []*types
for _, validator := range validators {
err := vc.resolveNodeID(&validator.NodeAddress)
if err != nil {
- vc.Logger.Error("cannot determine node id for validator, skipping", "url", validator.String(), "error", err)
+ vc.logger.Error("cannot determine node id for validator, skipping", "url", validator.String(), "error", err)
continue
}
results = append(results, validator)
@@ -311,7 +317,7 @@ func (vc *ValidatorConnExecutor) disconnectValidator(validator types.Validator)
return err
}
id := validator.NodeAddress.NodeID
- vc.Logger.Debug("disconnecting Validator", "validator", validator, "id", id, "address", validator.NodeAddress.String())
+ vc.logger.Debug("disconnecting Validator", "validator", validator, "id", id, "address", validator.NodeAddress.String())
if err := vc.dialer.DisconnectAsync(id); err != nil {
return err
}
@@ -327,10 +333,10 @@ func (vc *ValidatorConnExecutor) disconnectValidators(exceptions validatorMap) e
if err := vc.disconnectValidator(validator); err != nil {
if !errors.Is(err, errPeerNotFound) {
// no return, as we see it as non-fatal
- vc.Logger.Error("cannot disconnect Validator", "error", err)
+ vc.logger.Error("cannot disconnect Validator", "error", err)
continue
}
- vc.Logger.Debug("Validator already disconnected", "error", err)
+ vc.logger.Debug("Validator already disconnected", "error", err)
// We still delete the validator from vc.connectedValidators
}
delete(vc.connectedValidators, currentKey)
@@ -350,7 +356,7 @@ func (vc *ValidatorConnExecutor) isValidator() bool {
func (vc *ValidatorConnExecutor) updateConnections() error {
// We only do something if we are part of new ValidatorSet
if !vc.isValidator() {
- vc.Logger.Debug("not a member of active ValidatorSet")
+ vc.logger.Debug("not a member of active ValidatorSet")
// We need to disconnect connected validators. It needs to be done explicitly
// because they are marked as persistent and will never disconnect themselves.
return vc.disconnectValidators(validatorMap{})
@@ -359,22 +365,22 @@ func (vc *ValidatorConnExecutor) updateConnections() error {
// Find new newValidators
newValidators, err := vc.selectValidators()
if err != nil {
- vc.Logger.Error("cannot determine list of validators to connect", "error", err)
+ vc.logger.Error("cannot determine list of validators to connect", "error", err)
// no return, as we still need to disconnect unused validators
}
// Disconnect existing validators unless they are selected to be connected again
if err := vc.disconnectValidators(newValidators); err != nil {
return fmt.Errorf("cannot disconnect unused validators: %w", err)
}
- vc.Logger.Debug("filtering validators", "validators", newValidators.String())
+ vc.logger.Debug("filtering validators", "validators", newValidators.String())
// ensure that we can connect to all validators
newValidators = vc.filterAddresses(newValidators)
// Connect to new validators
- vc.Logger.Debug("dialing validators", "validators", newValidators.String())
+ vc.logger.Debug("dialing validators", "validators", newValidators.String())
if err := vc.dial(newValidators); err != nil {
return fmt.Errorf("cannot dial validators: %w", err)
}
- vc.Logger.Debug("connected to Validators", "validators", newValidators.String())
+ vc.logger.Debug("connected to Validators", "validators", newValidators.String())
return nil
}
@@ -383,20 +389,20 @@ func (vc *ValidatorConnExecutor) filterAddresses(validators validatorMap) valida
filtered := make(validatorMap, len(validators))
for id, validator := range validators {
if vc.proTxHash != nil && string(id) == vc.proTxHash.String() {
- vc.Logger.Debug("validator is ourself", "id", id, "address", validator.NodeAddress.String())
+ vc.logger.Debug("validator is ourself", "id", id, "address", validator.NodeAddress.String())
continue
}
if err := validator.ValidateBasic(); err != nil {
- vc.Logger.Debug("validator address is invalid", "id", id, "address", validator.NodeAddress.String())
+ vc.logger.Debug("validator address is invalid", "id", id, "address", validator.NodeAddress.String())
continue
}
if vc.connectedValidators.contains(validator) {
- vc.Logger.Debug("validator already connected", "id", id)
+ vc.logger.Debug("validator already connected", "id", id)
continue
}
if vc.dialer.IsDialingOrConnected(validator.NodeAddress.NodeID) {
- vc.Logger.Debug("already dialing this validator", "id", id, "address", validator.NodeAddress.String())
+ vc.logger.Debug("already dialing this validator", "id", id, "address", validator.NodeAddress.String())
continue
}
@@ -416,7 +422,7 @@ func (vc *ValidatorConnExecutor) dial(vals validatorMap) error {
vc.connectedValidators[id] = validator
address := nodeAddress(validator.NodeAddress)
if err := vc.dialer.ConnectAsync(address); err != nil {
- vc.Logger.Error("cannot dial validator", "address", address.String(), "err", err)
+ vc.logger.Error("cannot dial validator", "address", address.String(), "err", err)
return fmt.Errorf("cannot dial validator %s: %w", address.String(), err)
}
}
diff --git a/dash/quorum/validator_conn_executor_test.go b/dash/quorum/validator_conn_executor_test.go
index 78fdbc84e9..376376fa93 100644
--- a/dash/quorum/validator_conn_executor_test.go
+++ b/dash/quorum/validator_conn_executor_test.go
@@ -7,6 +7,7 @@ import (
"time"
"github.com/stretchr/testify/assert"
+ testifymock "github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
@@ -15,9 +16,11 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/dash/quorum/mock"
"github.com/tendermint/tendermint/dash/quorum/selectpeers"
- mmock "github.com/tendermint/tendermint/internal/mempool/mock"
+ "github.com/tendermint/tendermint/internal/eventbus"
+ "github.com/tendermint/tendermint/internal/mempool/mocks"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/proxy"
+ "github.com/tendermint/tendermint/internal/pubsub"
sm "github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/internal/store"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
@@ -27,12 +30,8 @@ import (
)
const (
- mySeedID uint16 = math.MaxUint16 - 1
-)
-
-var (
+ mySeedID uint16 = math.MaxUint16 - 1
chainID = "execution_chain"
- testPartSize uint32 = 65536
nTxsPerBlock = 10
)
@@ -342,54 +341,64 @@ func TestValidatorConnExecutor_ValidatorUpdatesSequence(t *testing.T) {
// TestEndBlock verifies if ValidatorConnExecutor is called correctly during processing of EndBlock
// message from the ABCI app.
-func TestEndBlock(t *testing.T) {
+func TestFinalizeBlock(t *testing.T) {
const timeout = 3 * time.Second // how long we'll wait for connection
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
app := newTestApp()
+ logger := log.NewTestingLogger(t)
- clientCreator := abciclient.NewLocalCreator(app)
- require.NotNil(t, clientCreator)
- proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
+ client := abciclient.NewLocalClient(logger, app)
+ require.NotNil(t, client)
+ proxyApp := proxy.New(client, logger, proxy.NopMetrics())
require.NotNil(t, proxyApp)
- err := proxyApp.Start()
+ err := proxyApp.Start(ctx)
require.Nil(t, err)
- defer proxyApp.Stop() //nolint:errcheck // ignore for tests
state, stateDB, _ := makeState(3, 1)
nodeProTxHash := state.Validators.Validators[0].ProTxHash
stateStore := sm.NewStore(stateDB)
blockStore := store.NewBlockStore(dbm.NewMemDB())
+ eventBus := eventbus.NewDefault(logger)
+ require.NoError(t, eventBus.Start(ctx))
+
+ mp := mocks.NewMempool(t)
+ mp.On("Lock").Return()
+ mp.On("Unlock").Return()
+ mp.On("FlushAppConn", testifymock.Anything).Return(nil)
+ mp.On("Update",
+ testifymock.Anything,
+ testifymock.Anything,
+ testifymock.Anything,
+ testifymock.Anything,
+ testifymock.Anything,
+ testifymock.Anything).Return(nil)
blockExec := sm.NewBlockExecutor(
stateStore,
- log.TestingLogger(),
- proxyApp.Consensus(),
- proxyApp.Query(),
- mmock.Mempool{},
+ logger,
+ proxyApp,
+ mp,
sm.EmptyEvidencePool{},
blockStore,
- nil,
+ eventBus,
+ sm.NopMetrics(),
)
- eventBus := types.NewEventBus()
- err = eventBus.Start()
- require.NoError(t, err)
- defer eventBus.Stop() //nolint:errcheck // ignore for tests
-
- blockExec.SetEventBus(eventBus)
-
- updatesSub, err := eventBus.Subscribe(
- context.Background(),
- "TestEndBlockValidatorUpdates",
- types.EventQueryValidatorSetUpdates,
+ updatesSub, err := eventBus.SubscribeWithArgs(
+ ctx,
+ pubsub.SubscribeArgs{
+ ClientID: "TestEndBlockValidatorUpdates",
+ Query: types.EventQueryValidatorSetUpdates,
+ },
)
require.NoError(t, err)
block := makeBlock(state, 1, new(types.Commit))
- blockID := types.BlockID{
- Hash: block.Hash(),
- PartSetHeader: block.MakePartSet(testPartSize).Header(),
- }
+ blockID, err := block.BlockID()
+ require.NoError(t, err)
vals := state.Validators
proTxHashes := vals.GetProTxHashes()
@@ -411,13 +420,12 @@ func TestEndBlock(t *testing.T) {
proTxHash := newVals.Validators[0].ProTxHash
vc, err := NewValidatorConnExecutor(proTxHash, eventBus, sw)
require.NoError(t, err)
- err = vc.Start()
+ err = vc.Start(ctx)
require.NoError(t, err)
- defer func() { err := vc.Stop(); require.NoError(t, err) }()
app.ValidatorSetUpdates[1] = newVals.ABCIEquivalentValidatorUpdates()
- state, err = blockExec.ApplyBlock(state, nodeProTxHash, blockID, block)
+ state, err = blockExec.ApplyBlock(ctx, state, nodeProTxHash, blockID, block)
require.Nil(t, err)
// test new validator was added to NextValidators
require.Equal(t, state.Validators.Size()+100, state.NextValidators.Size())
@@ -426,31 +434,29 @@ func TestEndBlock(t *testing.T) {
assert.Contains(t, nextValidatorsProTxHashes, addProTxHash)
}
+ sCtx, sCancel := context.WithTimeout(ctx, 1*time.Second)
+ defer sCancel()
// test we threw an event
- select {
- case msg := <-updatesSub.Out():
- event, ok := msg.Data().(types.EventDataValidatorSetUpdate)
- require.True(
+ msg, err := updatesSub.Next(sCtx)
+ require.NoError(t, err)
+
+ event, ok := msg.Data().(types.EventDataValidatorSetUpdate)
+ require.True(
+ t,
+ ok,
+ "Expected event of type EventDataValidatorSetUpdate, got %T",
+ msg.Data(),
+ )
+ if assert.NotEmpty(t, event.ValidatorSetUpdates) {
+ for _, addProTxHash := range addProTxHashes {
+ assert.Contains(t, mock.ValidatorsProTxHashes(event.ValidatorSetUpdates), addProTxHash)
+ }
+ assert.EqualValues(
t,
- ok,
- "Expected event of type EventDataValidatorSetUpdates, got %T",
- msg.Data(),
+ types.DefaultDashVotingPower,
+ event.ValidatorSetUpdates[1].VotingPower,
)
- if assert.NotEmpty(t, event.ValidatorSetUpdates) {
- for _, addProTxHash := range addProTxHashes {
- assert.Contains(t, mock.ValidatorsProTxHashes(event.ValidatorSetUpdates), addProTxHash)
- }
- assert.EqualValues(
- t,
- types.DefaultDashVotingPower,
- event.ValidatorSetUpdates[1].VotingPower,
- )
- assert.NotEmpty(t, event.QuorumHash)
- }
- case <-updatesSub.Canceled():
- t.Fatalf("updatesSub was canceled (reason: %v)", updatesSub.Err())
- case <-time.After(1 * time.Second):
- t.Fatal("Did not receive EventValidatorSetUpdates within 1 sec.")
+ assert.NotEmpty(t, event.QuorumHash)
}
// ensure some history got generated inside the Switch; we expect 1 dial event
@@ -475,7 +481,10 @@ func executeTestCase(t *testing.T, tc testCase) {
// const TIMEOUT = 100 * time.Millisecond
const TIMEOUT = 5 * time.Second
- eventBus, sw, vc := setup(t, tc.me)
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ eventBus, sw, vc := setup(ctx, t, tc.me)
defer cleanup(t, eventBus, sw, vc)
for updateID, update := range tc.validatorUpdates {
@@ -560,28 +569,29 @@ func allowedParamsDefaults(
// setup creates ValidatorConnExecutor and some dependencies.
// Use `defer cleanup()` to free the resources.
func setup(
+ ctx context.Context,
t *testing.T,
me *types.Validator,
-) (eventBus *types.EventBus, sw *mock.DashDialer, vc *ValidatorConnExecutor) {
- eventBus = types.NewEventBus()
- err := eventBus.Start()
+) (eventBus *eventbus.EventBus, sw *mock.DashDialer, vc *ValidatorConnExecutor) {
+ logger := log.NewTestingLogger(t)
+ eventBus = eventbus.NewDefault(logger)
+ err := eventBus.Start(ctx)
require.NoError(t, err)
sw = mock.NewDashDialer()
- proTxHash := me.ProTxHash
- vc, err = NewValidatorConnExecutor(proTxHash, eventBus, sw, WithLogger(log.TestingLogger()))
+ vc, err = NewValidatorConnExecutor(me.ProTxHash, eventBus, sw, WithLogger(logger))
require.NoError(t, err)
- err = vc.Start()
+ err = vc.Start(ctx)
require.NoError(t, err)
return eventBus, sw, vc
}
// cleanup frees some resources allocated for tests
-func cleanup(t *testing.T, bus *types.EventBus, dialer p2p.DashDialer, vc *ValidatorConnExecutor) {
- assert.NoError(t, bus.Stop())
- assert.NoError(t, vc.Stop())
+func cleanup(t *testing.T, bus *eventbus.EventBus, dialer p2p.DashDialer, vc *ValidatorConnExecutor) {
+ bus.Stop()
+ vc.Stop()
}
// SOME UTILS //
@@ -629,9 +639,10 @@ func makeState(nVals int, height int64) (sm.State, dbm.DB, map[string]types.Priv
}
func makeBlock(state sm.State, height int64, commit *types.Commit) *types.Block {
- block, _ := state.MakeBlock(height, nil, makeTxs(state.LastBlockHeight),
- commit, nil, state.Validators.GetProposer().ProTxHash, 0)
- return block
+ return state.MakeBlock(
+ height, nil, makeTxs(state.LastBlockHeight),
+ commit, nil, state.Validators.GetProposer().ProTxHash, 0,
+ )
}
// TEST APP //
@@ -640,48 +651,47 @@ func makeBlock(state sm.State, height int64, commit *types.Commit) *types.Block
type testApp struct {
abci.BaseApplication
- ByzantineValidators []abci.Evidence
+ ByzantineValidators []abci.Misbehavior
ValidatorSetUpdates map[int64]*abci.ValidatorSetUpdate
}
func newTestApp() *testApp {
return &testApp{
- ByzantineValidators: []abci.Evidence{},
+ ByzantineValidators: []abci.Misbehavior{},
ValidatorSetUpdates: map[int64]*abci.ValidatorSetUpdate{},
}
}
var _ abci.Application = (*testApp)(nil)
-func (app *testApp) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
- return abci.ResponseInfo{}
+func (app *testApp) Info(context.Context, *abci.RequestInfo) (*abci.ResponseInfo, error) {
+ return &abci.ResponseInfo{}, nil
}
-func (app *testApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
+func (app *testApp) FinalizeBlock(_ context.Context, req *abci.RequestFinalizeBlock) (*abci.ResponseFinalizeBlock, error) {
app.ByzantineValidators = req.ByzantineValidators
- return abci.ResponseBeginBlock{}
-}
-
-func (app *testApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
- return abci.ResponseEndBlock{
+ txs := make([]*abci.ExecTxResult, 0, len(req.Txs))
+ for _, tx := range req.Txs {
+ txs = append(txs, &abci.ExecTxResult{Data: tx})
+ }
+ return &abci.ResponseFinalizeBlock{
+ Events: []abci.Event{},
+ TxResults: txs,
ValidatorSetUpdate: app.ValidatorSetUpdates[req.Height],
ConsensusParamUpdates: &tmproto.ConsensusParams{
- Version: &tmproto.VersionParams{
- AppVersion: 1}}}
-}
-
-func (app *testApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
- return abci.ResponseDeliverTx{Events: []abci.Event{}}
+ Version: &tmproto.VersionParams{AppVersion: 1},
+ },
+ }, nil
}
-func (app *testApp) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
- return abci.ResponseCheckTx{}
+func (app *testApp) CheckTx(_ context.Context, req *abci.RequestCheckTx) (*abci.ResponseCheckTx, error) {
+ return &abci.ResponseCheckTx{Code: abci.CodeTypeOK}, nil
}
-func (app *testApp) Commit() abci.ResponseCommit {
- return abci.ResponseCommit{RetainHeight: 1}
+func (app *testApp) Commit(_ context.Context) (*abci.ResponseCommit, error) {
+ return &abci.ResponseCommit{RetainHeight: 1}, nil
}
-func (app *testApp) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
- return
+func (app *testApp) Query(_ context.Context, req *abci.RequestQuery) (*abci.ResponseQuery, error) {
+ return &abci.ResponseQuery{}, nil
}
diff --git a/docs/DOCS_README.md b/docs/DOCS_README.md
index c1ab1580ab..da06785d57 100644
--- a/docs/DOCS_README.md
+++ b/docs/DOCS_README.md
@@ -11,9 +11,9 @@ and other supported release branches.
There is a [GitHub Actions workflow](https://github.com/tendermint/docs/actions/workflows/deployment.yml)
in the `tendermint/docs` repository that clones and builds the documentation
-site from the contents of this `docs` directory, for `master` and for each
-supported release branch. Under the hood, this workflow runs `make build-docs`
-from the [Makefile](../Makefile#L214).
+site from the contents of this `docs` directory, for `master` and for the
+backport branch of each supported release. Under the hood, this workflow runs
+`make build-docs` from the [Makefile](../Makefile#L214).
The list of supported versions are defined in [`config.js`](./.vuepress/config.js),
which defines the UI menu on the documentation site, and also in
diff --git a/docs/README.md b/docs/README.md
index a9b6925323..3137d611a7 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -21,7 +21,7 @@ Tendermint?](introduction/what-is-tendermint.md).
To get started quickly with an example application, see the [quick start guide](introduction/quick-start.md).
-To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/spec/tree/master/spec/abci).
+To learn about application development on Tendermint, see the [Application Blockchain Interface](../spec/abci).
For more details on using Tendermint, see the respective documentation for
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](nodes/).
diff --git a/docs/app-dev/abci-cli.md b/docs/app-dev/abci-cli.md
index 9768c32950..7649b7cde7 100644
--- a/docs/app-dev/abci-cli.md
+++ b/docs/app-dev/abci-cli.md
@@ -27,17 +27,17 @@ Usage:
abci-cli [command]
Available Commands:
- batch Run a batch of abci commands against an application
- check_tx Validate a tx
- commit Commit the application state and return the Merkle root hash
- console Start an interactive abci console for multiple commands
- deliver_tx Deliver a new tx to the application
- kvstore ABCI demo example
- echo Have the application echo a message
- help Help about any command
- info Get some info about the application
- query Query the application state
- set_option Set an options on the application
+ batch Run a batch of abci commands against an application
+ check_tx Validate a tx
+ commit Commit the application state and return the Merkle root hash
+ console Start an interactive abci console for multiple commands
+ finalize_block Send a set of transactions to the application
+ kvstore ABCI demo example
+ echo Have the application echo a message
+ help Help about any command
+ info Get some info about the application
+ query Query the application state
+ set_option Set an option on the application
Flags:
--abci string socket or grpc (default "socket")
@@ -53,7 +53,7 @@ Use "abci-cli [command] --help" for more information about a command.
The `abci-cli` tool lets us send ABCI messages to our application, to
help build and debug them.
-The most important messages are `deliver_tx`, `check_tx`, and `commit`,
+The most important messages are `finalize_block`, `check_tx`, and `commit`,
but there are others for convenience, configuration, and information
purposes.
@@ -83,19 +83,19 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
+
+ // Stop upon receiving SIGTERM or CTRL-C.
+ ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
+ defer cancel()
+
srv.SetLogger(logger.With("module", "abci-server"))
- if err := srv.Start(); err != nil {
+ if err := srv.Start(ctx); err != nil {
return err
}
- // Stop upon receiving SIGTERM or CTRL-C.
- tmos.TrapSignal(logger, func() {
- // Cleanup
- srv.Stop()
- })
-
- // Run forever.
- select {}
+ // Run until shutdown.
+ <-ctx.Done()
+ srv.Wait()
}
```
@@ -173,7 +173,7 @@ Try running these commands:
-> code: OK
-> data.hex: 0x0000000000000000
-> deliver_tx "abc"
+> finalize_block "abc"
-> code: OK
> info
@@ -192,7 +192,7 @@ Try running these commands:
-> value: abc
-> value.hex: 616263
-> deliver_tx "def=xyz"
+> finalize_block "def=xyz"
-> code: OK
> commit
@@ -207,8 +207,8 @@ Try running these commands:
-> value.hex: 78797A
```
-Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if
-we do `deliver_tx "abc=efg"` it will store `(abc, efg)`.
+Note that if we do `finalize_block "abc"` it will store `(abc, abc)`, but if
+we do `finalize_block "abc=efg"` it will store `(abc, efg)`.
Similarly, you could put the commands in a file and run
`abci-cli --verbose batch < myfile`.
diff --git a/docs/app-dev/app-architecture.md b/docs/app-dev/app-architecture.md
index ec2822688c..f478547bca 100644
--- a/docs/app-dev/app-architecture.md
+++ b/docs/app-dev/app-architecture.md
@@ -57,4 +57,4 @@ See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://docs.tendermint.com/master/rpc/)
- [Tendermint in Production](../tendermint-core/running-in-production.md)
-- [ABCI spec](https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
+- [ABCI spec](https://github.com/tendermint/tendermint/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
diff --git a/docs/app-dev/getting-started.md b/docs/app-dev/getting-started.md
index 2f5739e0f1..a480137cac 100644
--- a/docs/app-dev/getting-started.md
+++ b/docs/app-dev/getting-started.md
@@ -96,25 +96,21 @@ like:
```json
{
- "jsonrpc": "2.0",
- "id": "",
- "result": {
- "check_tx": {},
- "deliver_tx": {
- "tags": [
- {
- "key": "YXBwLmNyZWF0b3I=",
- "value": "amFl"
- },
- {
- "key": "YXBwLmtleQ==",
- "value": "YWJjZA=="
- }
- ]
- },
- "hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
- "height": 14
- }
+ "check_tx": { ... },
+ "deliver_tx": {
+ "tags": [
+ {
+ "key": "YXBwLmNyZWF0b3I=",
+ "value": "amFl"
+ },
+ {
+ "key": "YXBwLmtleQ==",
+ "value": "YWJjZA=="
+ }
+ ]
+ },
+ "hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
+ "height": 14
}
```
@@ -129,15 +125,11 @@ The result should look like:
```json
{
- "jsonrpc": "2.0",
- "id": "",
- "result": {
- "response": {
- "log": "exists",
- "index": "-1",
- "key": "YWJjZA==",
- "value": "YWJjZA=="
- }
+ "response": {
+ "log": "exists",
+ "index": "-1",
+ "key": "YWJjZA==",
+ "value": "YWJjZA=="
}
}
```
@@ -190,7 +182,7 @@ node example/counter.js
In another window, reset and start `tendermint`:
```sh
-tendermint unsafe-reset-all
+tendermint reset unsafe-all
tendermint start
```
diff --git a/docs/app-dev/indexing-transactions.md b/docs/app-dev/indexing-transactions.md
index b8b06d01b9..67d17c8794 100644
--- a/docs/app-dev/indexing-transactions.md
+++ b/docs/app-dev/indexing-transactions.md
@@ -15,7 +15,7 @@ the block itself is never stored.
Each event contains a type and a list of attributes, which are key-value pairs
denoting something about what happened during the method's execution. For more
details on `Events`, see the
-[ABCI](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md#events)
+[ABCI](https://github.com/tendermint/tendermint/blob/master/spec/abci/abci.md#events)
documentation.
An `Event` has a composite key associated with it. A `compositeKey` is
diff --git a/docs/app-dev/readme.md b/docs/app-dev/readme.md
index 51e88fc34a..46ce06ca00 100644
--- a/docs/app-dev/readme.md
+++ b/docs/app-dev/readme.md
@@ -1,7 +1,6 @@
---
order: false
parent:
+ title: "Building Applications"
order: 3
----
-
-# Apps
+---
\ No newline at end of file
diff --git a/docs/architecture/adr-073-libp2p.md b/docs/architecture/adr-073-libp2p.md
new file mode 100644
index 0000000000..080fecbcdf
--- /dev/null
+++ b/docs/architecture/adr-073-libp2p.md
@@ -0,0 +1,235 @@
+# ADR 073: Adopt LibP2P
+
+## Changelog
+
+- 2021-11-02: Initial Draft (@tychoish)
+
+## Status
+
+Proposed.
+
+## Context
+
+
+As part of the 0.35 development cycle, the Tendermint team completed
+the first phase of the work described in ADRs 61 and 62, which included a
+large scale refactoring of the reactors and the p2p message
+routing. This replaced the switch and many of the other legacy
+components without breaking protocol or network-level
+interoperability and left the legacy connection/socket handling code.
+
+Following the release, the team has reexamined the state of the code
+and the design, as well as Tendermint's requirements. The notes
+from that process are available in the [P2P Roadmap
+RFC][rfc].
+
+This ADR supersedes the decisions made in ADRs 61 and 62, but
+builds on the completed portions of this work. Previously, the
+boundaries of peer management, message handling, and the higher level
+business logic (e.g., "the reactors") were intermingled, and core
+elements of the p2p system were responsible for the orchestration of
+higher-level business logic. Refactoring the legacy components
+made it more obvious that this entanglement of responsibilities
+had outsized influence on the entire implementation, making
+it difficult to iterate within the current abstractions.
+It would not be viable to maintain interoperability with legacy
+systems while also achieving many of our broader objectives.
+
+LibP2P is a thoroughly-specified implementation of a peer-to-peer
+networking stack, designed specifically for systems such as
+ours. Adopting LibP2P as the basis of Tendermint will allow the
+Tendermint team to focus more of their time on other differentiating
+aspects of the system, and make it possible for the ecosystem as a
+whole to take advantage of tooling and efforts of the LibP2P
+platform.
+
+## Alternative Approaches
+
+As discussed in the [P2P Roadmap RFC][rfc], the primary alternative would be to
+continue development of Tendermint's home-grown peer-to-peer
+layer. While that would give the Tendermint team maximal control
+over the peer system, the current design is unexceptional on its
+own merits, and the prospective maintenance burden for this system
+exceeds our tolerances for the medium term.
+
+Tendermint can and should differentiate itself not on the basis of
+its networking implementation or peer management tools, but by providing
+a consistent operator experience, a battle-tested consensus algorithm,
+and an ergonomic user experience.
+
+## Decision
+
+Tendermint will adopt libp2p during the 0.37 development cycle,
+replacing the bespoke Tendermint P2P stack. This will remove the
+`Endpoint`, `Transport`, `Connection`, and `PeerManager` abstractions
+and leave the reactors, `p2p.Router` and `p2p.Channel`
+abstractions.
+
+LibP2P may obviate the need for a dedicated peer exchange (PEX)
+reactor, which would also in turn obviate the need for a dedicated
+seed mode. If this is the case, then all of this functionality would
+be removed.
+
+If it turns out (based on the advice of Protocol Labs) that it makes
+sense to maintain separate pubsub or gossipsub topics
+per-message-type, then the `Router` abstraction could also
+be entirely subsumed.
+
+## Detailed Design
+
+### Implementation Changes
+
+The seams in the P2P implementation between the higher level
+constructs (reactors), the routing layer (`Router`) and the lower
+level connection and peer management code make this operation
+relatively straightforward to implement. A key
+goal in this design is to minimize the impact on the reactors
+(potentially entirely) and completely remove the lower level
+components (e.g., `Transport`, `Connection` and `PeerManager`) using the
+separation afforded by the `Router` layer. The current state of the
+code makes these changes relatively surgical, and limited to a small
+number of methods:
+
+- `p2p.Router.OpenChannel` will still return a `Channel` structure
+ which will continue to serve as a pipe between the reactors and the
+ `Router`. The implementation will no longer need the queue
+ implementation, and will instead start goroutines that
+ are responsible for routing the messages from the channel to libp2p
+ fundamentals, replacing the current `p2p.Router.routeChannel`.
+
+- The current `p2p.Router.dialPeers` and `p2p.Router.acceptPeers`,
+ are responsible for establishing outbound and inbound connections,
+ respectively. These methods will be removed, along with
+ `p2p.Router.openConnection`, and the libp2p connection manager will
+ be responsible for maintaining network connectivity.
+
+- The `p2p.Channel` interface will change to replace Go
+ channels with a more functional interface for sending messages.
+ New methods on this object will take contexts to support safe
+ cancellation, and return errors, and will block rather than
+ running asynchronously. The `Out` channel through which
+ reactors send messages to Peers, will be replaced by a `Send`
+ method, and the Error channel will be replaced by an `Error`
+ method.
+
+- Reactors will be passed an interface that will allow them to
+ access Peer information from libp2p. This will supplant the
+ `p2p.PeerUpdates` subscription.
+
+- Add some kind of heartbeat message at the application level
+ (e.g., with a reactor), potentially connected to libp2p's DHT, to be
+ used by reactors for service discovery, message targeting, or other
+ features.
+
+- Replace the existing/legacy handshake protocol with [Noise](http://www.noiseprotocol.org/noise.html).
+
+This project will initially use the TCP-based transport protocols within
+libp2p. QUIC is also available as an option that we may implement later.
+We will not support mixed networks in the initial release, but will
+revisit that possibility later if there is a demonstrated need.
+
+### Upgrade and Compatibility
+
+Because the routers and all current P2P libraries are `internal`
+packages and not part of the public API, the only changes to the public
+API surface area of Tendermint will be different configuration
+file options, replacing the current P2P options with options relevant
+to libp2p.
+
+However, it will not be possible to run a network with both networking
+stacks active at once, so the upgrade to the version of Tendermint
+will need to be coordinated between all nodes of the network. This is
+consistent with the expectations around upgrades for Tendermint moving
+forward, and will help manage both the complexity of the project and
+the implementation timeline.
+
+## Open Questions
+
+- What is the role of Protocol Labs in the implementation of libp2p in
+ tendermint, both during the initial implementation and on an ongoing
+ basis thereafter?
+
+- Should all P2P traffic for a given node be pushed to a single topic,
+ so that a topic maps to a specific ChainID, or should
+ each reactor (or type of message) have its own topic? How many
+ topics can a libp2p network support? Is there testing that validates
+ the capabilities?
+
+- Tendermint presently provides a very coarse QoS-like functionality
+ using priorities based on message-type.
+ This intuitively/theoretically ensures that evidence and consensus
+ messages don't get starved by blocksync/statesync messages. It's
+ unclear if we can or should attempt to replicate this with libp2p.
+
+- What kind of QoS functionality does libp2p provide and what kind of
+ metrics does libp2p provide about its QoS functionality?
+
+- Is it possible to store additional (and potentially arbitrary)
+ information into the DHT as part of the heartbeats between nodes,
+ such as the latest height, and then access that in the
+ reactors? How frequently can the DHT be updated?
+
+- Does it make sense to have reactors continue to consume inbound
+ messages from a Channel (`In`) or is there another interface or
+ pattern that we should consider?
+
+ - We should avoid exposing Go channels when possible, and likely
+ some kind of alternate iterator makes sense for processing
+ messages within the reactors.
+
+- What are the security and protocol implications of tracking
+ information from peer heartbeats and exposing that to reactors?
+
+- How much (or how little) configuration can Tendermint provide for
+ libp2p, particularly on the first release?
+
+ - In general, we should not support byo-functionality for libp2p
+ components within Tendermint, and reduce the configuration surface
+ area, as much as possible.
+
+- What are the best ways to provide request/response semantics for
+ reactors on top of libp2p? Will it be possible to add
+ request/response semantics in a future release or is there
+ anticipatory work that needs to be done as part of the initial
+ release?
+
+## Consequences
+
+### Positive
+
+- Reduce the maintenance burden for the Tendermint Core team by
+ removing a large swath of legacy code that has proven to be
+ difficult to modify safely.
+
+- Remove the responsibility for maintaining and developing the entire
+ peer management system (p2p) and stack.
+
+- By providing users with a more stable peer and networking system,
+ Tendermint can improve operator experience and network stability.
+
+### Negative
+
+- By deferring to library implementations for peer management and
+ networking, Tendermint loses some flexibility for innovating at the
+ peer and networking level. However, Tendermint should be innovating
+ primarily at the consensus layer, and libp2p does not preclude
+ optimization or development in the peer layer.
+
+- Libp2p is a large dependency and Tendermint would become dependent
+ upon Protocol Labs' release cycle and prioritization for bug
+ fixes. If this proves onerous, it's possible to maintain a vendor
+ fork of relevant components as needed.
+
+### Neutral
+
+- N/A
+
+## References
+
+- [ADR 61: P2P Refactor Scope][adr61]
+- [ADR 62: P2P Architecture][adr62]
+- [P2P Roadmap RFC][rfc]
+
+[adr61]: ./adr-061-p2p-refactor-scope.md
+[adr62]: ./adr-062-p2p-architecture.md
+[rfc]: ../rfc/rfc-000-p2p-roadmap.rst
diff --git a/docs/architecture/adr-074-timeout-params.md b/docs/architecture/adr-074-timeout-params.md
new file mode 100644
index 0000000000..22fd784bd9
--- /dev/null
+++ b/docs/architecture/adr-074-timeout-params.md
@@ -0,0 +1,203 @@
+# ADR 74: Migrate Timeout Parameters to Consensus Parameters
+
+## Changelog
+
+- 03-Jan-2022: Initial draft (@williambanfield)
+- 13-Jan-2022: Updated to indicate work on upgrade path needed (@williambanfield)
+
+## Status
+
+Proposed
+
+## Context
+
+### Background
+
+Tendermint's consensus timeout parameters are currently configured locally by each validator
+in the validator's [config.toml][config-toml].
+This means that the validators on a Tendermint network may have different timeouts
+from each other. There is no reason for validators on the same network to configure
+different timeout values. Proper functioning of the Tendermint consensus algorithm
+relies on these parameters being uniform across validators.
+
+The configurable values are as follows:
+
+* `TimeoutPropose`
+ * How long the consensus algorithm waits for a proposal block before issuing a prevote.
+ * If no proposal arrives by `TimeoutPropose`, the consensus algorithm issues a nil prevote.
+* `TimeoutProposeDelta`
+ * How much the `TimeoutPropose` grows each round.
+* `TimeoutPrevote`
+ * How long the consensus algorithm waits after receiving +2/3 prevotes with
+ no quorum for a value before issuing a precommit for nil.
+ (See the [arXiv paper][arxiv-paper], Algorithm 1, Line 34)
+* `TimeoutPrevoteDelta`
+ * How much the `TimeoutPrevote` increases with each round.
+* `TimeoutPrecommit`
+ * How long the consensus algorithm waits after receiving +2/3 precommits that
+ do not have a quorum for a value before entering the next round.
+ (See the [arXiv paper][arxiv-paper], Algorithm 1, Line 47)
+* `TimeoutPrecommitDelta`
+ * How much the `TimeoutPrecommit` increases with each round.
+* `TimeoutCommit`
+ * How long the consensus algorithm waits after committing a block but before starting the new height.
+ * This gives a validator a chance to receive slow precommits.
+* `SkipTimeoutCommit`
+ * Make progress as soon as the node has 100% of the precommits.
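
For reference, these parameters currently live in the `[consensus]` section of each node's `config.toml`. A sketch with values matching Tendermint's usual defaults (shown for illustration; exact defaults may differ by version):

```toml
[consensus]
timeout_propose = "3s"
timeout_propose_delta = "500ms"
timeout_prevote = "1s"
timeout_prevote_delta = "500ms"
timeout_precommit = "1s"
timeout_precommit_delta = "500ms"
timeout_commit = "1s"
skip_timeout_commit = false
```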
+
+
+### Overview of Change
+
+We will consolidate the timeout parameters and migrate them from the node-local
+`config.toml` file into the network-global consensus parameters.
+
+The 8 timeout parameters will be consolidated down to 6. These will be as follows:
+
+* `TimeoutPropose`
+ * Same as current `TimeoutPropose`.
+* `TimeoutProposeDelta`
+ * Same as current `TimeoutProposeDelta`.
+* `TimeoutVote`
+ * How long validators wait for votes in both the prevote
+ and precommit phase of the consensus algorithm. This parameter subsumes
+ the current `TimeoutPrevote` and `TimeoutPrecommit` parameters.
+* `TimeoutVoteDelta`
+ * How much the `TimeoutVote` will grow each successive round.
+ This parameter subsumes the current `TimeoutPrevoteDelta` and `TimeoutPrecommitDelta`
+ parameters.
+* `TimeoutCommit`
+ * Same as current `TimeoutCommit`.
+* `BypassCommitTimeout`
+ * Same as current `SkipTimeoutCommit`, renamed for clarity.
+
+A safe default will be provided by Tendermint for each of these parameters and
+networks will be able to update the parameters as they see fit. Local updates
+to these parameters will no longer be possible; instead, the application will control
+updating the parameters. Applications using the Cosmos SDK will automatically be
+able to change the values of these consensus parameters [via a governance proposal][cosmos-sdk-consensus-params].
+
+This change is low-risk. While parameters are locally configurable, many running chains
+do not change them from their default values. For example, initializing
+a node on Osmosis, Terra, and the Cosmos Hub using their `init` command produces
+a `config.toml` with Tendermint's default values for these parameters.
+
+### Why this parameter consolidation?
+
+Reducing the number of parameters is good for UX. Fewer superfluous parameters makes
+running and operating a Tendermint network less confusing.
+
+The Prevote and Precommit messages are of similar size and require similar amounts
+of processing, so there is no strong need for them to be configured separately.
+
+The `TimeoutPropose` parameter governs how long Tendermint will wait for the proposed
+block to be gossiped. Blocks are much larger than votes and therefore tend to be
+gossiped much more slowly. It therefore makes sense to keep `TimeoutPropose` and
+the `TimeoutProposeDelta` as parameters separate from the vote timeouts.
+
+`TimeoutCommit` is used by chains to ensure that the network waits for the votes from
+slower validators before proceeding to the next height. Without this timeout, the votes
+from slower validators would consistently not be included in blocks and those validators
+would not be counted as 'up' from the chain's perspective. Being down damages a validator's
+reputation and causes potential stakers to think twice before delegating to that validator.
+
+`TimeoutCommit` also prevents the network from producing the next height as soon as validators
+on the fastest hardware with a summed voting power of +2/3 of the network's total have
+completed execution of the block. Allowing the network to proceed as soon as the fastest
++2/3 completed execution would have a cumulative effect over heights, eventually
+leaving slower validators unable to participate in consensus at all. `TimeoutCommit`
+therefore allows networks to have greater variability in hardware. Additional
+discussion of this can be found in [tendermint issue 5911][tendermint-issue-5911-comment]
+and [spec issue 359][spec-issue-359].
+
+## Alternative Approaches
+
+### Hardcode the parameters
+
+Many Tendermint networks run on similar cloud-hosted infrastructure. Therefore,
+they have similar bandwidth and machine resources. The timings for propagating votes
+and blocks are likely to be reasonably similar across networks. As a result, the
+timeout parameters are good candidates for being hardcoded. Hardcoding the timeouts
+in Tendermint would mean entirely removing these parameters from any configuration
+that could be altered by either an application or a node operator. Instead,
+Tendermint would ship with a set of timeouts and all applications using Tendermint
+would use this exact same set of values.
+
+While Tendermint nodes often run with similar bandwidth and on similar cloud-hosted
+machines, there are enough points of variability to make configuring
+consensus timeouts meaningful. Namely, Tendermint network topologies are likely to be
+very different from chain to chain. Additionally, applications may vary greatly in
+how long the `Commit` phase may take. Applications that perform more work during `Commit`
+require a longer `TimeoutCommit` to allow the application to complete its work
+and be prepared for the next height.
+
+## Decision
+
+The decision has been made to implement this work, with the caveat that the
+specific mechanism for introducing the new parameters to chains is still ongoing.
+
+## Detailed Design
+
+### New Consensus Parameters
+
+A new `TimeoutParams` `message` will be added to the [params.proto file][consensus-params-proto].
+This message will have the following form:
+
+```proto
+message TimeoutParams {
+ google.protobuf.Duration propose = 1;
+ google.protobuf.Duration propose_delta = 2;
+ google.protobuf.Duration vote = 3;
+ google.protobuf.Duration vote_delta = 4;
+ google.protobuf.Duration commit = 5;
+ bool bypass_commit_timeout = 6;
+}
+```
+
+This new message will be added as a field into the [`ConsensusParams`
+message][consensus-params-proto]. The same default values that are [currently
+set for these parameters][current-timeout-defaults] in the local configuration
+file will be used as the defaults for these new consensus parameters in the
+[consensus parameter defaults][default-consensus-params].
+
+The new consensus parameters will be subject to the same
+[validity rules][time-param-validation] as the current configuration values,
+namely, each value must be non-negative.
+
+### Migration
+
+The new consensus parameters will be added during an upcoming release. In this
+release, the old `config.toml` parameters will cease to control the timeouts and
+an error will be logged on nodes that continue to specify these values. The specific
+mechanism by which these parameters will be added to a chain is being discussed in
+[RFC-009][rfc-009] and will be decided ahead of the next release.
+
+The specific mechanism for adding these parameters depends on work related to
+[soft upgrades][soft-upgrades], which is still ongoing.
+
+## Consequences
+
+### Positive
+
+* Timeout parameters will be equal across all of the validators in a Tendermint network.
+* Remove superfluous timeout parameters.
+
+### Negative
+
+### Neutral
+
+* Timeout parameters require consensus to change.
+
+## References
+
+[consensus-params-proto]: https://github.com/tendermint/spec/blob/a00de7199f5558cdd6245bbbcd1d8405ccfb8129/proto/tendermint/types/params.proto#L11
+[hashed-params]: https://github.com/tendermint/tendermint/blob/7cdf560173dee6773b80d1c574a06489d4c394fe/types/params.go#L49
+[default-consensus-params]: https://github.com/tendermint/tendermint/blob/7cdf560173dee6773b80d1c574a06489d4c394fe/types/params.go#L79
+[current-timeout-defaults]: https://github.com/tendermint/tendermint/blob/7cdf560173dee6773b80d1c574a06489d4c394fe/config/config.go#L955
+[config-toml]: https://github.com/tendermint/tendermint/blob/5cc980698a3402afce76b26693ab54b8f67f038b/config/toml.go#L425-L440
+[cosmos-sdk-consensus-params]: https://github.com/cosmos/cosmos-sdk/issues/6197
+[time-param-validation]: https://github.com/tendermint/tendermint/blob/7cdf560173dee6773b80d1c574a06489d4c394fe/config/config.go#L1038
+[tendermint-issue-5911-comment]: https://github.com/tendermint/tendermint/issues/5911#issuecomment-973560381
+[spec-issue-359]: https://github.com/tendermint/spec/issues/359
+[arxiv-paper]: https://arxiv.org/pdf/1807.04938.pdf
+[soft-upgrades]: https://github.com/tendermint/spec/pull/222
+[rfc-009]: https://github.com/tendermint/tendermint/pull/7524
diff --git a/docs/architecture/adr-075-rpc-subscription.md b/docs/architecture/adr-075-rpc-subscription.md
new file mode 100644
index 0000000000..1ca48e7123
--- /dev/null
+++ b/docs/architecture/adr-075-rpc-subscription.md
@@ -0,0 +1,684 @@
+# ADR 075: RPC Event Subscription Interface
+
+## Changelog
+
+- 01-Mar-2022: Update long-polling interface (@creachadair).
+- 10-Feb-2022: Updates to reflect implementation.
+- 26-Jan-2022: Marked accepted.
+- 22-Jan-2022: Updated and expanded (@creachadair).
+- 20-Nov-2021: Initial draft (@creachadair).
+
+---
+## Status
+
+Accepted
+
+---
+## Background & Context
+
+For context, see [RFC 006: Event Subscription][rfc006].
+
+The [Tendermint RPC service][rpc-service] permits clients to subscribe to the
+event stream generated by a consensus node. This allows clients to observe the
+state of the consensus network, including details of the consensus algorithm
+state machine, proposals, transaction delivery, and block completion. The
+application may also attach custom key-value attributes to events to expose
+application-specific details to clients.
+
+The event subscription API in the RPC service currently comprises three methods:
+
+1. `subscribe`: A request to subscribe to the events matching a specific
+ [query expression][query-grammar]. Events can be filtered by their key-value
+ attributes, including custom attributes provided by the application.
+
+2. `unsubscribe`: A request to cancel an existing subscription based on its
+ query expression.
+
+3. `unsubscribe_all`: A request to cancel all existing subscriptions belonging
+ to the client.
+
+There are some important technical and UX issues with the current RPC event
+subscription API. The rest of this ADR outlines these problems in detail, and
+proposes a new API scheme intended to address them.
+
+### Issue 1: Persistent connections
+
+To subscribe to a node's event stream, a client needs a persistent connection
+to the node. Unlike the other methods of the service, for which each call is
+serviced by a short-lived HTTP round trip, subscription delivers a continuous
+stream of events to the client by hijacking the HTTP channel for a websocket.
+The stream (and hence the HTTP request) persists until either the subscription
+is explicitly cancelled, or the connection is closed.
+
+There are several problems with this API:
+
+1. **Expensive per-connection state**: The server must maintain a substantial
+ amount of state per subscriber client:
+
+ - The current implementation uses a [WebSocket][ws] for each active
+ subscriber. The connection must be maintained even if there are no
+ matching events for a given client.
+
+ The server can drop idle connections to save resources, but doing so
+ terminates all subscriptions on those connections and forces those clients
+ to re-connect, adding additional resource churn for the server.
+
+ - In addition, the server maintains a separate buffer of undelivered events
+ for each client. This is to reduce the dual risks that a client will miss
+ events, and that a slow client could "push back" on the publisher,
+ impeding the progress of consensus.
+
+ Because event traffic is quite bursty, queues can potentially take up a
+ lot of memory. Moreover, each subscriber may have a different filter
+ query, so the server winds up having to duplicate the same events among
+ multiple subscriber queues. Not only does this add memory pressure, but it
+ does so most at the worst possible time, i.e., when the server is already
+ under load from high event traffic.
+
+2. **Operational access control is difficult**: The server's websocket
+ interface exposes _all_ the RPC service endpoints, not only the subscription
+ methods. This includes methods that allow callers to inject arbitrary
+ transactions (`broadcast_tx_*`) and evidence (`broadcast_evidence`) into the
+ network, remove transactions (`remove_tx`), and request arbitrary amounts of
+ chain state.
+
+ Filtering requests to the GET endpoint is straightforward: A reverse proxy
+ like [nginx][nginx] can easily filter methods by URL path. Filtering POST
+ requests takes a bit more work, but can be managed with a filter program
+ that speaks [FastCGI][fcgi] and parses JSON-RPC request bodies.
+
+ Filtering the websocket interface requires a dedicated proxy implementation.
+ Although nginx can [reverse-proxy websockets][rp-ws], it does not support
+ filtering websocket traffic via FastCGI. The operator would need to either
+ implement a custom [nginx extension module][ng-xm] or build and run a
+ standalone proxy that implements the websocket protocol and filters each session. Apart
+ from the work, this also makes the system even more resource intensive, as
+ well as introducing yet another connection that could potentially time out
+ or stall on full buffers.
+
+ Even for the simple case of restricting access to only event subscription,
+ there is no easy solution currently: Once a caller has access to the
+ websocket endpoint, it has complete access to the RPC service.
+
+### Issue 2: Inconvenient client API
+
+The subscription interface has some inconvenient features for the client as
+well as the server. These include:
+
+1. **Non-standard protocol:** The RPC service is mostly [JSON-RPC 2.0][jsonrpc2],
+ but the subscription interface diverges from the standard.
+
+ In a standard JSON-RPC 2.0 call, the client initiates a request to the
+ server with a unique ID, and the server concludes the call by sending a
+ reply for that ID. The `subscribe` implementation, however, sends multiple
+ responses to the client's request:
+
+ - The client sends `subscribe` with some ID `x` and the desired query
+
+ - The server responds with ID `x` and an empty confirmation response.
+
+ - The server then (repeatedly) sends event result responses with ID `x`, one
+ for each item with a matching event.
+
+ Standard JSON-RPC clients will reject the subsequent replies, as they
+ announce a request ID (`x`) that is already complete. This means a caller
+ has to implement Tendermint-specific handling for these responses.
+
+ Moreover, the result format is different between the initial confirmation
+ and the subsequent responses. This means a caller has to implement special
+ logic for decoding the first response versus the subsequent ones.
+
+2. **No way to detect data loss:** The subscriber connection can be terminated
+ for many reasons. Even ignoring ordinary network issues (e.g., packet loss):
+
+ - The server will drop messages and/or close the websocket if its write
+ buffer fills, or if the queue of undelivered matching events is not
+ drained fast enough. The client has no way to discover that messages were
+ dropped even if the connection remains open.
+
+ - Either the client or the server may close the websocket if the websocket
+ PING and PONG exchanges are not handled correctly, or frequently enough.
+ Even if correctly implemented, this may fail if the system is under high
+ load and cannot service those control messages in a timely manner.
+
+ When the connection is terminated, the server drops all the subscriptions
+ for that client (as if it had called `unsubscribe_all`). Even if the client
+ reconnects, any events that were published during the period between the
+ disconnect and re-connect and re-subscription will be silently lost, and the
+ client has no way to discover that it missed some relevant messages.
+
+3. **No way to replay old events:** Even if a client knew it had missed some
+ events (due to a disconnection, for example), the API provides no way for
+ the client to "play back" events it may have missed.
+
+4. **Large response sizes:** Some event data can be quite large, and there can
+ be substantial duplication across items. The API allows the client to select
+ _which_ events are reported, but has no way to control which parts of a
+ matching event it wishes to receive.
+
+ This can be costly on the server (which has to marshal those data into
+ JSON), the network, and the client (which has to unmarshal the result and
+ then pick through for the components that are relevant to it).
+
+ Besides being inefficient, this also contributes to some of the persistent
+ connection issues mentioned above, e.g., filling up the websocket write
+ buffer and forcing the server to queue potentially several copies of a large
+ value in memory.
+
+5. **Client identity is tied to network address:** The Tendermint event API
+ identifies each subscriber by a (Client ID, Query) pair. In the RPC service,
+ the query is provided by the client, but the client ID is set to the TCP
+ address of the client (typically "host:port" or "ip:port").
+
+ This means that even if the server did _not_ drop subscriptions immediately
+ when the websocket connection is closed, a client may not be able to
+ reattach to its existing subscription. Dialing a new connection is likely
+ to result in a different port (and, depending on their own proxy setup,
+ possibly a different public IP).
+
+ In isolation, this problem would be easy to work around with a new
+ subscription parameter, but it would require several other changes to the
+ handling of event subscriptions for that workaround to become useful.
+
+---
+## Decision
+
+To address the described problems, we will:
+
+1. Introduce a new API for event subscription to the Tendermint RPC service.
+ The proposed API is described in [Detailed Design](#detailed-design) below.
+
+2. This new API will target the Tendermint v0.36 release, during which the
+ current ("streaming") API will remain available as-is, but deprecated.
+
+3. The streaming API will be entirely removed in release v0.37, which will
+ require all users of event subscription to switch to the new API.
+
+> **Point for discussion:** Given that ABCI++ and PBTS are the main priorities
+> for v0.36, it would be fine to slip the first phase of this work to v0.37.
+> Unless there is a time problem, however, the proposed design does not disrupt
+> the work on ABCI++ or PBTS, and will not increase the scope of breaking
+> changes. Therefore the plan is to begin in v0.36 and slip only if necessary.
+
+---
+## Detailed Design
+
+### Design Goals
+
+Specific goals of this design include:
+
+1. Remove the need for a persistent connection to each subscription client.
+ Subscribers should use the same HTTP request flow for event subscription
+ requests as for other RPC calls.
+
+2. The server retains minimal state (possibly none) per-subscriber. In
+ particular:
+
+ - The server does not buffer unconsumed writes nor queue undelivered events
+ on a per-client basis.
+ - A client that stalls or goes idle does not cost the server any resources.
+ - Any event data that is buffered or stored is shared among _all_
+ subscribers, and is not duplicated per client.
+
+3. Slow clients have no impact (or minimal impact) on the rate of progress of
+ the consensus algorithm, beyond the ambient overhead of servicing individual
+ RPC requests.
+
+4. Clients can tell when they have missed events matching their subscription,
+ within some reasonable (configurable) window of time, and can "replay"
+ events within that window to catch up.
+
+5. Nice to have: It should be easy to use the event subscription API from
+ existing standard tools and libraries, including command-line use for
+ testing and experimentation.
+
+### Definitions
+
+- The **event stream** of a node is a single, time-ordered, heterogeneous
+ stream of event items.
+
+- Each **event item** comprises an **event datum** (for example, block header
+ metadata for a new-block event), and zero or more optional **events**.
+
+- An **event** means the [ABCI `Event` data type][abci-event], which comprises
+ a string type and zero or more string key-value **event attributes**.
+
+ The use of the new terms "event item" and "event datum" is to avert confusion
+ between the values that are published to the event bus (what we call here
+ "event items") and the ABCI `Event` data type.
+
+- The node assigns each event item a unique identifier string called a
+ **cursor**. A cursor must be unique among all events published by a single
+ node, but it is not required to be unique globally across nodes.
+
+ Cursors are time-ordered so that given event items A and B, if A was
+ published before B, then cursor(A) < cursor(B) in lexicographic order.
+
+ A minimum viable cursor implementation is a tuple consisting of a timestamp
+ and a sequence number (e.g., `16CCC798FB5F4670-0123`). However, it may also
+ be useful to append basic type information to a cursor, to allow efficient
+ filtering (e.g., `16CCC87E91869050-0091:BeginBlock`).
+
+ The initial implementation will use the minimum viable format.
+
+### Discussion
+
+The node maintains an **event log**, a shared ordered record of the events
+published to its event bus within an operator-configurable time window. The
+initial implementation will store the event log in-memory, and the operator
+will be given two per-node configuration settings. Note that these names are
+provisional:
+
+- `[rpc] event-log-window-size`: A duration before the latest published event,
+ during which the node will retain event items published. Setting this value
+ to zero disables event subscription.
+
+- `[rpc] event-log-max-items`: A maximum number of event items that the node
+ will retain within the time window. If the number of items exceeds this
+ value, the node discards the oldest items in the window. Setting this value
+ to zero means that no limit is imposed on the number of items.
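Under these provisional names, the corresponding `config.toml` section might look like the following (the values shown are illustrative, not proposed defaults):

```toml
[rpc]
# Retain event items published within the last 30 seconds.
event-log-window-size = "30s"
# Retain at most 100000 items within that window (0 = no limit).
event-log-max-items = 100000
```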
+
+The node will retain all events within the time window, provided they do not
+exceed the maximum number. These config parameters allow the operator to
+loosely regulate how much memory and storage the node allocates to the event
+log. The client can use the server reply to tell whether the events it wants
+are still available from the event log.
+
+The event log is shared among all subscribers to the node.
+
+> **Discussion point:** Should events persist across node restarts?
+>
+> The current event API does not persist events across restarts, so this new
+> design does not either. Note, however, that we may "spill" older event data
+> to disk as a way of controlling memory use. Such usage is ephemeral, however,
+> and does not need to be tracked as node data (e.g., it could be temp files).
+
+### Query API
+
+To retrieve event data, the client will call the (new) RPC method `events`.
+The parameters of this method will correspond to the following Go types:
+
+```go
+type EventParams struct {
+ // Optional filter spec. If nil or empty, all items are eligible.
+ Filter *Filter `json:"filter"`
+
+ // The maximum number of eligible results to return.
+ // If zero or negative, the server will report a default number.
+ MaxResults int `json:"max_results"`
+
+ // Return only items after this cursor. If empty, the limit is just
+ // before the beginning of the event log.
+ After string `json:"after"`
+
+ // Return only items before this cursor. If empty, the limit is just
+ // after the head of the event log.
+ Before string `json:"before"`
+
+ // Wait for up to this long for events to be available.
+ WaitTime time.Duration `json:"wait_time"`
+}
+
+type Filter struct {
+ Query string `json:"query"`
+}
+```
+
+> **Discussion point:** The initial implementation will not cache filter
+> queries for the client. If this turns out to be a performance issue in
+> production, the service can keep a small shared cache of compiled queries.
+> Given the improvements from #7319 et seq., this should not be necessary.
+
+> **Discussion point:** For the initial implementation, the new API will use
+> the existing query language as-is. Future work may extend the Filter message
+> with a more structured and/or expressive query surface, but that is beyond
+> the scope of this design.
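As a concrete illustration, a JSON-RPC request for this method might look like the example below. The shape follows the Go types above; the `wait_time` encoding assumes Go's default JSON marshaling of `time.Duration` as integer nanoseconds (here, 30s):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "events",
  "params": {
    "filter": {"query": "tm.event = 'NewBlock'"},
    "max_results": 10,
    "after": "",
    "before": "",
    "wait_time": 30000000000
  }
}
```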
+
+The semantics of the request are as follows: An item in the event log is
+**eligible** for a query if:
+
+- It is newer than the `after` cursor (if set).
+- It is older than the `before` cursor (if set).
+- It matches the filter (if set).
+
+Among the eligible items in the log, the server returns up to `max_results` of
+the newest items, in reverse order of cursor. If `max_results` is unset the
+server chooses a number to return, and will cap `max_results` at a sensible
+limit.
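Because cursors are lexicographically ordered, the cursor half of this eligibility check reduces to plain string comparisons. A minimal sketch (names are illustrative):

```go
// eligible reports whether an item with the given cursor falls inside
// the requested range: strictly newer than `after` (if set) and
// strictly older than `before` (if set). The filter query, applied
// separately, completes the eligibility test described above.
func eligible(cursor, after, before string) bool {
	if after != "" && cursor <= after {
		return false
	}
	if before != "" && cursor >= before {
		return false
	}
	return true
}
```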
+
+The `wait_time` parameter is used to effect polling. If `before` is empty and
+no items are available, the server will wait for up to `wait_time` for matching
+items to arrive at the head of the log. If `wait_time` is zero or negative, the
+server will wait for a default (positive) interval.
+
+If `before` is non-empty, `wait_time` is ignored: new results are only added to
+the head of the log, so there is no need to wait. This allows the client to
+poll for new data, and "page" backward through matching event items. This is
+discussed in more detail below.
+
+The server will set a sensible cap on the maximum `wait_time`, overriding
+client-requested intervals longer than that.
+
+A successful reply from the `events` request corresponds to the following Go
+types:
+
+```go
+type EventReply struct {
+ // The items matching the request parameters, from newest
+ // to oldest, if any were available within the timeout.
+ Items []*EventItem `json:"items"`
+
+ // This is true if there is at least one older matching item
+ // available in the log that was not returned.
+ More bool `json:"more"`
+
+ // The cursor of the oldest item in the log at the time of this reply,
+ // or "" if the log is empty.
+ Oldest string `json:"oldest"`
+
+ // The cursor of the newest item in the log at the time of this reply,
+ // or "" if the log is empty.
+ Newest string `json:"newest"`
+}
+
+type EventItem struct {
+ // The cursor of this item.
+ Cursor string `json:"cursor"`
+
+ // The encoded event data for this item.
+ // The type identifies the structure of the value.
+ Data struct {
+ Type string `json:"type"`
+ Value json.RawMessage `json:"value"`
+ } `json:"data"`
+}
+```
+
+The `oldest` and `newest` fields of the reply report the cursors of the oldest
+and newest items (of any kind) recorded in the event log at the time of the
+reply, or are `""` if the log is empty.
+
+The `data` field contains the type-specific event datum. The datum carries any
+ABCI events that may have been defined.
+
+> **Discussion point**: Based on [issue #7273][i7273], I did not include a
+> separate field in the response for the ABCI events, since it duplicates data
+> already stored elsewhere in the event data.
+
+The semantics of the reply are as follows:
+
+- If `items` is non-empty:
+
+ - Items are ordered from newest to oldest.
+
+ - If `more` is true, there is at least one additional, older item in the
+ event log that was not returned (in excess of `max_results`).
+
+ In this case the client can fetch the next page by setting `before` in a
+ new request, to the cursor of the oldest item fetched (i.e., the last one
+ in `items`).
+
+ - Otherwise (if `more` is false), all the matching results have been
+ reported (pagination is complete).
+
+ - The first element of `items` identifies the newest item considered.
+ Subsequent poll requests can set `after` to this cursor to skip items
+ that were already retrieved.
+
+- If `items` is empty:
+
+ - If the `before` was set in the request, there are no further eligible
+ items for this query in the log (pagination is complete).
+
+ This is just a safety case; the client can detect this without issuing
+ another call by consulting the `more` field of the previous reply.
+
+ - If the `before` was empty in the request, no eligible items were
+ available before the `wait_time` expired. The client may poll again to
+ wait for more event items.
+
+A client can store cursor values to detect data loss and to recover from
+crashes and connectivity issues:
+
+- After a crash, the client requests events after the newest cursor it has
+ seen. If the reply indicates that cursor is no longer in range, the client
+ may (conservatively) conclude some event data may have been lost.
+
+- On the other hand, if it _is_ in range, the client can then page back through
+ the results that it missed, and then resume polling. As long as its recovery
+ cursor does not age out before it finishes, the client can be sure it has all
+ the relevant results.
+
+### Other Notes
+
+- The new API supports two general "modes" of operation:
+
+ 1. In ordinary operation, clients will **long-poll** the head of the event
+ log for new events matching their criteria (by setting a `wait_time` and
+ no `before`).
+
+ 2. If there are more events than the client requested, or if the client needs
+ to read older events to recover from a stall or crash, clients will
+ **page** backward through the event log (by setting `before` and `after`).
+
+- While the new API requires explicit polling by the client, it makes better
+ use of the node's existing HTTP infrastructure (e.g., connection pools).
+ Moreover, the direct implementation is easier to use from standard tools and
+ client libraries for HTTP and JSON-RPC.
+
+ Explicit polling does shift the burden of timeliness to the client. That is
+ arguably preferable, however, given that the RPC service is ancillary to the
+ node's primary goal, viz., consensus. The details of polling can be easily
+ hidden from client applications with simple libraries.
+
+- The format of a cursor is considered opaque to the client. Clients must not
+ parse cursor values, but they may rely on their ordering properties.
+
+- To maintain the event log, the server must prune items outside the time
+ window and in excess of the item limit.
+
+ The initial implementation will do this by checking the tail of the event log
+ after each new item is published. If the number of items in the log exceeds
+ the item limit, it will delete oldest items until the log is under the limit;
+ then discard any older than the time window before the latest.
+
+ To minimize coordination interference between the publisher (the event bus)
+ and the subscribers (the `events` service handlers), the event log will be
+ stored as a persistent linear queue with shared structure (a cons list). A
+ single reader-writer mutex will guard the "head" of the queue where new
+ items are published:
+
+ - **To publish a new item**, the publisher acquires the write lock, conses a
+ new item to the front of the existing queue, and replaces the head pointer
+ with the new item.
+
+ - **To scan the queue**, a reader acquires the read lock, captures the head
+ pointer, and then releases the lock. The rest of its request can be served
+ without holding a lock, since the queue structure will not change.
+
+ When a reader wants to wait, it will yield the lock and wait on a condition
+ that is signaled when the publisher swings the pointer.
+
+ - **To prune the queue**, the publisher (who is the sole writer) will track
+ the queue length and the age of the oldest item separately. When the
+ length or age exceeds the configured bounds, it will construct a new
+ queue spine on the same items, discarding those that fall outside the bounds.
+
+ Pruning can be done while the publisher already holds the write lock, or
+ could be done outside the lock entirely: Once the new queue is constructed,
+ the lock can be re-acquired to swing the pointer. This costs some extra
+ allocations for the cons cells, but avoids duplicating any event items.
+ The pruning step is a simple linear scan down the first (up to) max-items
+ elements of the queue, to find the breakpoint of age and length.
+
+ Moreover, the publisher can amortize the cost of pruning by item count, if
+ necessary, by pruning length "more aggressively" than the configuration
+ requires (e.g., reducing to 3/4 of the maximum rather than 1/1).
+
+ The state of the event log before the publisher acquires the lock:
+ ![Before publish and pruning](./img/adr-075-log-before.png)
+
+ After the publisher has added a new item and pruned old ones:
+ ![After publish and pruning](./img/adr-075-log-after.png)
+
+### Migration Plan
+
+This design requires that clients eventually migrate to the new event
+subscription API, but provides a full release cycle with both APIs in place to
+make this burden more tractable. The migration strategy is broadly:
+
+**Phase 1**: Release v0.36.
+
+- Implement the new `events` endpoint, keeping the existing methods as they are.
+- Update the Go clients to support the new `events` endpoint, and handle polling.
+- Update the old endpoints to log annoyingly about their own deprecation.
+- Write tutorials about how to migrate client usage.
+
+At or shortly after release, we should proactively update the Cosmos SDK to use
+the new API, to remove a disincentive to upgrading.
+
+**Phase 2**: Release v0.37.
+
+- During development, we should actively seek out any existing users of the
+ streaming event subscription API and help them migrate.
+- Possibly also: Spend some time writing clients for JS, Rust, et al.
+- Release: Delete the old implementation and all the websocket support code.
+
+> **Discussion point**: Even though the plan is to keep the existing service,
+> we might take the opportunity to restrict the websocket endpoint to _only_
+> the event streaming service, removing the other endpoints. To minimize the
+> disruption for users in the v0.36 cycle, I have decided not to do this for
+> the first phase.
+>
+> If we wind up pushing this design into v0.37, however, we should re-evaluate
+> this partial turn-down of the websocket.
+
+### Future Work
+
+- This design does not immediately address the problem of allowing the client
+ to control which data are reported back for event items. That concern is
+ deferred to future work. However, it would be straightforward to extend the
+ filter and/or the request parameters to allow more control.
+
+- The node currently stores a subset of event data (specifically the block and
+ transaction events) for use in reindexing. While these data are redundant
+ with the event log described in this document, they are not sufficient to
+ cover event subscription, as they omit other event types.
+
+ In the future we should investigate consolidating or removing event data from
+ the state store entirely. For now this issue is out of scope for purposes of
+ updating the RPC API. We may be able to piggyback on the database unification
+ plans (see [RFC 001][rfc001]) to store the event log separately, so its
+ pruning policy does not need to be tied to the block and state stores.
+
+- This design reuses the existing filter query language from the old API. In
+ the future we may want to use a more structured and/or expressive query. The
+ Filter object can be extended with more fields as needed to support this.
+
+- Some users have trouble communicating with the RPC service because of
+ configuration problems like improperly-set CORS policies. While this design
+ does not address those issues directly, we might want to revisit how we set
+ policies in the RPC service to make it less susceptible to confusing errors
+ caused by misconfiguration.
+
+---
+## Consequences
+
+- ✅ Reduces the number of transport options for RPC. Supports [RFC 002][rfc002].
+- ✅ Removes the primary non-standard use of JSON-RPC.
+- ⛔️ Forces clients to migrate to a different API (eventually).
+- ↕️ API requires clients to poll, but this reduces client state on the server.
+- ↕️ We have to maintain both implementations for a whole release, but this
+ gives clients time to migrate.
+
+---
+## Alternative Approaches
+
+The following alternative approaches were considered:
+
+1. **Leave it alone.** Since existing tools mostly already work with the API as
+ it stands today, we could leave it alone and do our best to improve its
+ performance and reliability.
+
+ Based on many issues reported by users and node operators (e.g.,
+ [#3380][i3380], [#6439][i6439], [#6729][i6729], [#7247][i7247]), the
+ problems described here affect even the existing use that works. Investing
+ further incremental effort in the existing API is unlikely to address these
+ issues.
+
+2. **Design a better streaming API.** Instead of polling, we might try to
+ design a better "streaming" API for event subscription.
+
+ A significant advantage of switching away from streaming is to remove the
+ need for persistent connections between the node and subscribers. A new
+ streaming protocol design would lose that advantage, and would still need a
+ way to let clients recover and replay.
+
+ This approach might look better if we decided to use a different protocol
+ for event subscription, say gRPC instead of JSON-RPC. That choice, however,
+ would be just as breaking for existing clients, for marginal benefit.
+ Moreover, this option increases both the complexity and the resource cost on
+ the node implementation.
+
+ Given that resource consumption and complexity are important considerations,
+ this option was not chosen.
+
+3. **Defer to an external event broker.** We might remove the entire event
+ subscription infrastructure from the node, and define an optional interface
+ to allow the node to publish all its events to an external event broker,
+ such as Apache Kafka.
+
+ This has the advantage of greatly simplifying the node, but at a great cost
+ to the node operator: To enable event subscription in this design, the
+ operator has to stand up and maintain a separate process in communion with
+ the node, and configuration changes would have to be coordinated across
+ both.
+
+ Moreover, this approach would be highly disruptive to existing client use,
+ and migration would probably require switching to third-party libraries.
+ Despite the potential benefits for the node itself, the costs to operators
+ and clients seem too large for this to be the best option.
+
+ Publishing to an external event broker might be a worthwhile future project,
+ if there is any demand for it. That decision is out of scope for this design,
+ as it interacts with the design of the indexer as well.
+
+---
+## References
+
+- [RFC 006: Event Subscription][rfc006]
+- [Tendermint RPC service][rpc-service]
+- [Event query grammar][query-grammar]
+- [RFC 6455: The WebSocket protocol][ws]
+- [JSON-RPC 2.0 Specification][jsonrpc2]
+- [Nginx proxy server][nginx]
+ - [Proxying websockets][rp-ws]
+ - [Extension modules][ng-xm]
+- [FastCGI][fcgi]
+- [RFC 001: Storage Engines & Database Layer][rfc001]
+- [RFC 002: Interprocess Communication in Tendermint][rfc002]
+- Issues:
+ - [rpc/client: test that client resubscribes upon disconnect][i3380] (#3380)
+ - [Too high memory usage when creating many events subscriptions][i6439] (#6439)
+ - [Tendermint emits events faster than clients can pull them][i6729] (#6729)
+ - [indexer: unbuffered event subscription slow down the consensus][i7247] (#7247)
+ - [rpc: remove duplication of events when querying][i7273] (#7273)
+
+[rfc006]: https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-006-event-subscription.md
+[rpc-service]: https://github.com/tendermint/tendermint/blob/master/rpc/openapi/openapi.yaml
+[query-grammar]: https://pkg.go.dev/github.com/tendermint/tendermint@master/internal/pubsub/query/syntax
+[ws]: https://datatracker.ietf.org/doc/html/rfc6455
+[jsonrpc2]: https://www.jsonrpc.org/specification
+[nginx]: https://nginx.org/en/docs/
+[fcgi]: http://www.mit.edu/~yandros/doc/specs/fcgi-spec.html
+[rp-ws]: https://nginx.org/en/docs/http/websocket.html
+
+[ng-xm]: https://www.nginx.com/resources/wiki/extending/
+[abci-event]: https://pkg.go.dev/github.com/tendermint/tendermint/abci/types#Event
+[rfc001]: https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-001-storage-engine.rst
+[rfc002]: https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-002-ipc-ecosystem.md
+[i3380]: https://github.com/tendermint/tendermint/issues/3380
+[i6439]: https://github.com/tendermint/tendermint/issues/6439
+[i6729]: https://github.com/tendermint/tendermint/issues/6729
+[i7247]: https://github.com/tendermint/tendermint/issues/7247
+[i7273]: https://github.com/tendermint/tendermint/issues/7273
diff --git a/docs/architecture/adr-076-combine-spec-repo.md b/docs/architecture/adr-076-combine-spec-repo.md
new file mode 100644
index 0000000000..a6365da5b8
--- /dev/null
+++ b/docs/architecture/adr-076-combine-spec-repo.md
@@ -0,0 +1,112 @@
+# ADR 076: Combine Spec and Tendermint Repositories
+
+## Changelog
+
+- 2022-02-04: Initial Draft. (@tychoish)
+
+## Status
+
+Accepted.
+
+## Context
+
+While the specification for Tendermint was originally in the same
+repository as the Go implementation, at some point the specification
+was split from the core repository and maintained separately from the
+implementation. While this makes sense in promoting a conceptual
+separation of specification and implementation, in practice this
+separation was a premature optimization, apparently aimed at supporting
+alternate implementations of Tendermint.
+
+The operational and documentary burden of maintaining a separate
+spec repo has not returned value to justify its cost. There are no active
+projects to develop alternate implementations of Tendermint based on the
+common specification, and having separate repositories creates an ongoing
+burden to coordinate versions, documentation, and releases.
+
+## Decision
+
+The specification repository will be merged back into the Tendermint
+core repository.
+
+Stakeholders, including representatives from the maintainers of the
+spec, the Go implementation, and the Tendermint Rust library, agreed
+to merge the repositories in the Tendermint core dev meeting on 27
+January 2022, including @williambanfield @cmwaters @creachadair and
+@thanethomson.
+
+## Alternative Approaches
+
+The main alternative we considered was to keep separate repositories,
+and to introduce a coordinated versioning scheme between the two, so
+that users could figure out which spec versions go with which versions
+of the core implementation.
+
+We decided against this on the grounds that it would further complicate
+the release process for _both_ repositories, without mitigating any of
+the other existing issues.
+
+## Detailed Design
+
+Clone and merge the master branch of the `tendermint/spec` repository
+as a branch of the `tendermint/tendermint` repository, to ensure the commit history
+of both repositories remains intact.
+
+### Implementation Instructions
+
+1. Within the `tendermint` repository, execute the following commands
+ to add a new branch with the history of the master branch of `spec`:
+
+ ```bash
+ git remote add spec git@github.com:tendermint/spec.git
+ git fetch spec
+ git checkout -b spec-master spec/master
+ mkdir spec
+ git ls-tree -z --name-only HEAD | xargs -0 -I {} git mv {} spec/
+ git commit -m "spec: organize specification prior to merge"
+ git checkout -b spec-merge-mainline origin/master
+ git merge --allow-unrelated-histories spec-master
+ ```
+
+ This merges the spec into the `tendermint/tendermint` repository as
+ a normal branch. This commit can also be backported to the 0.35
+ branch, if needed.
+
+2. Migrate outstanding issues from `tendermint/spec` to the
+ `tendermint/tendermint` repository.
+
+3. In the specification repository, add a redirect to the README and mark
+ the repository as archived.
+
+
+## Consequences
+
+### Positive
+
+Easier maintenance for the specification will obviate a number of
+complicated and annoying versioning problems, and will help prevent the
+possibility of the specification and the implementation drifting apart.
+
+Additionally, co-locating the specification will help encourage
+cross-pollination and collaboration between engineers focusing on the
+specification and the protocol and engineers focusing on the implementation.
+
+### Negative
+
+Co-locating the spec and Go implementation has the potential effect of
+prioritizing the Go implementation with regards to the spec, and
+making it difficult to think about alternate implementations of the
+Tendermint algorithm. Although we may want to foster additional
+Tendermint implementations in the future, this isn't an active goal
+in our current roadmap, and *not* merging these repos doesn't
+change the fact that the Go implementation of Tendermint is already the
+primary implementation.
+
+### Neutral
+
+N/A
+
+## References
+
+- https://github.com/tendermint/spec
+- https://github.com/tendermint/tendermint
diff --git a/docs/architecture/adr-077-block-retention.md b/docs/architecture/adr-077-block-retention.md
new file mode 100644
index 0000000000..714b4810af
--- /dev/null
+++ b/docs/architecture/adr-077-block-retention.md
@@ -0,0 +1,109 @@
+# ADR 077: Configurable Block Retention
+
+## Changelog
+
+- 2020-03-23: Initial draft (@erikgrinaker)
+- 2020-03-25: Use local config for snapshot interval (@erikgrinaker)
+- 2020-03-31: Use ABCI commit response for block retention hint
+- 2020-04-02: Resolved open questions
+- 2021-02-11: Migrate to tendermint repo (Originally [RFC 001](https://github.com/tendermint/spec/pull/84))
+
+## Author(s)
+
+- Erik Grinaker (@erikgrinaker)
+
+## Context
+
+Currently, all Tendermint nodes contain the complete sequence of blocks from genesis up to some height (typically the latest chain height). This will no longer be true when the following features are released:
+
+- [Block pruning](https://github.com/tendermint/tendermint/issues/3652): removes historical blocks and associated data (e.g. validator sets) up to some height, keeping only the most recent blocks.
+
+- [State sync](https://github.com/tendermint/tendermint/issues/828): bootstraps a new node by syncing state machine snapshots at a given height, but not historical blocks and associated data.
+
+To maintain the integrity of the chain, the use of these features must be coordinated such that necessary historical blocks will not become unavailable or lost forever. In particular:
+
+- Some nodes should have complete block histories, for auditability, querying, and bootstrapping.
+
+- The majority of nodes should retain blocks longer than the Cosmos SDK unbonding period, for light client verification.
+
+- Some nodes must take and serve state sync snapshots with snapshot intervals less than the block retention periods, to allow new nodes to state sync and then replay blocks to catch up.
+
+- Applications may not persist their state on commit, and require block replay on restart.
+
+- Only a minority of nodes can be state synced within the unbonding period, for light client verification and to serve block histories for catch-up.
+
+However, it is unclear if and how we should enforce this. It may not be possible to technically enforce all of these without knowing the state of the entire network, but it may also be unrealistic to expect this to be enforced entirely through social coordination. This is especially unfortunate since the consequences of misconfiguration can be permanent chain-wide data loss.
+
+## Proposal
+
+Add a new field `retain_height` to the ABCI `ResponseCommit` message:
+
+```proto
+service ABCIApplication {
+ rpc Commit(RequestCommit) returns (ResponseCommit);
+}
+
+message RequestCommit {}
+
+message ResponseCommit {
+ // reserve 1
+ bytes data = 2; // the Merkle root hash
+ uint64 retain_height = 3; // the oldest block height to retain
+}
+```
+
+Upon ABCI `Commit`, which finalizes execution of a block in the state machine, Tendermint removes all data for heights lower than `retain_height`. This allows the state machine to control block retention, which is preferable since only it can determine the significance of historical blocks. By default (i.e. with `retain_height=0`) all historical blocks are retained.
+
+Removed data includes not only blocks, but also headers, commit info, consensus params, validator sets, and so on. In the first iteration this will be done synchronously, since the number of heights removed for each run is assumed to be small (often 1) in the typical case. It can be made asynchronous at a later time if this is shown to be necessary.
+
+Since `retain_height` is dynamic, it is possible for it to refer to a height which has already been removed. For example, commit at height 100 may return `retain_height=90` while commit at height 101 may return `retain_height=80`. This is allowed, and will be ignored - it is the application's responsibility to return appropriate values.
+
+State sync will eventually support backfilling heights, via e.g. a snapshot metadata field `backfill_height`, but in the initial version it will have a fully truncated block history.
+
+## Cosmos SDK Example
+
+As an example, we'll consider how the Cosmos SDK might make use of this. The specific details should be discussed in a separate SDK proposal.
+
+The returned `retain_height` would be the lowest height that satisfies:
+
+- Unbonding time: the time interval in which validators can be economically punished for misbehavior. Blocks in this interval must be auditable e.g. by the light client.
+
+- IAVL snapshot interval: the block interval at which the underlying IAVL database is persisted to disk, e.g. every 10000 heights. Blocks since the last IAVL snapshot must be available for replay on application restart.
+
+- State sync snapshots: blocks since the _oldest_ available snapshot must be available for state sync nodes to catch up (oldest because a node may be restoring an old snapshot while a new snapshot was taken).
+
+- Local config: archive nodes may want to retain more or all blocks, e.g. via a local config option `min-retain-blocks`. There may also be a need to vary retention for other nodes, e.g. sentry nodes which do not need historical blocks.
+
+![Cosmos SDK block retention diagram](img/block-retention.png)
+
+## Status
+
+Accepted
+
+## Consequences
+
+### Positive
+
+- Application-specified block retention allows the application to take all relevant factors into account and prevent necessary blocks from being accidentally removed.
+
+- Node operators can independently decide whether they want to provide complete block histories (if local configuration for this is provided) and snapshots.
+
+### Negative
+
+- Social coordination is required to run archival nodes, failure to do so may lead to permanent loss of historical blocks.
+
+- Social coordination is required to run snapshot nodes, failure to do so may lead to inability to run state sync, and inability to bootstrap new nodes at all if no archival nodes are online.
+
+### Neutral
+
+- Reduced block retention requires application changes, and cannot be controlled directly in Tendermint.
+
+- Application-specified block retention may set a lower bound on disk space requirements for all nodes.
+
+## References
+
+- State sync ADR:
+
+- State sync issue:
+
+- Block pruning issue:
diff --git a/docs/architecture/adr-078-nonzero-genesis.md b/docs/architecture/adr-078-nonzero-genesis.md
new file mode 100644
index 0000000000..bd9c030f0a
--- /dev/null
+++ b/docs/architecture/adr-078-nonzero-genesis.md
@@ -0,0 +1,82 @@
+# ADR 078: Non-Zero Genesis
+
+## Changelog
+
+- 2020-07-26: Initial draft (@erikgrinaker)
+- 2020-07-28: Use weak chain linking, i.e. `predecessor` field (@erikgrinaker)
+- 2020-07-31: Drop chain linking (@erikgrinaker)
+- 2020-08-03: Add `State.InitialHeight` (@erikgrinaker)
+- 2021-02-11: Migrate to tendermint repo (Originally [RFC 002](https://github.com/tendermint/spec/pull/119))
+
+## Author(s)
+
+- Erik Grinaker (@erikgrinaker)
+
+## Context
+
+The recommended upgrade path for block protocol-breaking upgrades is currently to hard fork the
+chain (see e.g. [`cosmoshub-3` upgrade](https://blog.cosmos.network/cosmos-hub-3-upgrade-announcement-39c9da941aee)).
+This is done by halting all validators at a predetermined height, exporting the application
+state via application-specific tooling, and creating an entirely new chain using the exported
+application state.
+
+As far as Tendermint is concerned, the upgraded chain is a completely separate chain, with e.g.
+a new chain ID and genesis file. Notably, the new chain starts at height 1, and has none of the
+old chain's block history. This causes problems for integrators, e.g. coin exchanges and
+wallets, that assume a monotonically increasing height for a given blockchain. Users also find
+it confusing that a given height can now refer to distinct states depending on the chain
+version.
+
+An ideal solution would be to always retain block backwards compatibility in such a way that chain
+history is never lost on upgrades. However, this may require a significant amount of engineering
+work that is not viable for the planned Stargate release (Tendermint 0.34), and may prove too
+restrictive for future development.
+
+As a first step, allowing the new chain to start from an initial height specified in the genesis
+file would at least provide monotonically increasing heights. There was a proposal to include the
+last block header of the previous chain as well, but since the genesis file is not verified and
+hashed (only specific fields are) this would not be trustworthy.
+
+External tooling will be required to map historical heights onto e.g. archive nodes that contain
+blocks from previous chain versions. Tendermint will not include any such functionality.
+
+## Proposal
+
+Tendermint will allow chains to start from an arbitrary initial height:
+
+- A new field `initial_height` is added to the genesis file, defaulting to `1`. It can be set to any
+non-negative integer, and `0` is considered equivalent to `1`.
+
+- A new field `InitialHeight` is added to the ABCI `RequestInitChain` message, with the same value
+and semantics as the genesis field.
+
+- A new field `InitialHeight` is added to the `state.State` struct, where `0` is considered invalid.
+ Including the field here simplifies implementation, since the genesis value does not have to be
+ propagated throughout the code base separately, but it is not strictly necessary.
+
+ABCI applications may have to be updated to handle arbitrary initial heights, otherwise the initial
+block may fail.
+
+## Status
+
+Accepted
+
+## Consequences
+
+### Positive
+
+- Heights can be unique throughout the history of a "logical" chain, across hard fork upgrades.
+
+### Negative
+
+- Upgrades still cause loss of block history.
+
+- Integrators will have to map height ranges to specific archive nodes/networks to query history.
+
+### Neutral
+
+- There is no explicit link to the last block of the previous chain.
+
+## References
+
+- [#2543: Allow genesis file to start from non-zero height w/ prev block header](https://github.com/tendermint/tendermint/issues/2543)
diff --git a/docs/architecture/adr-079-ed25519-verification.md b/docs/architecture/adr-079-ed25519-verification.md
new file mode 100644
index 0000000000..c20869e6c4
--- /dev/null
+++ b/docs/architecture/adr-079-ed25519-verification.md
@@ -0,0 +1,57 @@
+# ADR 079: Ed25519 Verification
+
+## Changelog
+
+- 2020-08-21: Initial RFC
+- 2021-02-11: Migrate RFC to tendermint repo (Originally [RFC 003](https://github.com/tendermint/spec/pull/144))
+
+## Author(s)
+
+- Marko (@marbar3778)
+
+## Context
+
+Ed25519 keys are currently the only supported key type for Tendermint validators. Tendermint-Go wraps the ed25519 key implementation from the Go standard library. As more clients are implemented to communicate with the canonical Tendermint implementation (Tendermint-Go), different implementations of ed25519 will be used. Because [RFC 8032](https://www.rfc-editor.org/rfc/rfc8032.html) does not guarantee implementation compatibility, Tendermint clients must come to an agreement on how to guarantee it. [Zcash](https://z.cash/) has multiple implementations of its client and has identified this as a problem as well. The team at Zcash has made a proposal to address this issue, [Zcash improvement proposal 215](https://zips.z.cash/zip-0215).
+
+## Proposal
+
+- Tendermint-Go would adopt [hdevalence/ed25519consensus](https://github.com/hdevalence/ed25519consensus).
+ - This library implements `ed25519.Verify()` in accordance with ZIP-215. Tendermint-Go will continue to use `crypto/ed25519` for signing and key generation.
+
+- Tendermint-rs would adopt [ed25519-zebra](https://github.com/ZcashFoundation/ed25519-zebra)
+ - related [issue](https://github.com/informalsystems/tendermint-rs/issues/355)
+
+Signature verification is one of the major bottlenecks of Tendermint-Go. Batch verification cannot be used unless all implementations follow the same consensus rules, and ZIP 215 makes verification safe in consensus-critical areas.
+
+This change constitutes a breaking change, and therefore must be made in a major release. No changes to validator keys or operations will be needed to enable this change.
+
+This change has no impact on signature aggregation. To enable signature aggregation, Tendermint would have to use a different signature scheme (Schnorr, BLS, ...). Secondly, this change will enable safe batch verification for the Tendermint-Go client; batch verification for the Rust client is already supported in the library being used.
+
+As part of the acceptance of this proposal it would be best to contract or discuss with a third party the process of conducting a security review of the Go library.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+- Consistent signature verification across implementations
+- Enable safe batch verification
+
+### Negative
+
+#### Tendermint-Go
+
+- Third-party dependency
+ - The library has not gone through a security review.
+ - Unclear maintenance schedule
+- Fragmentation of the ed25519 implementation for the Go client: verification is done
+ using a third-party library while the rest uses the Go standard library.
+
+### Neutral
+
+## References
+
+[It’s 255:19AM. Do you know what your validation criteria are?](https://hdevalence.ca/blog/2020-10-04-its-25519am)
diff --git a/docs/architecture/adr-080-reverse-sync.md b/docs/architecture/adr-080-reverse-sync.md
new file mode 100644
index 0000000000..57d747fc8d
--- /dev/null
+++ b/docs/architecture/adr-080-reverse-sync.md
@@ -0,0 +1,203 @@
+# ADR 080: ReverseSync - fetching historical data
+
+## Changelog
+
+- 2021-02-11: Migrate to tendermint repo (Originally [RFC 005](https://github.com/tendermint/spec/pull/224))
+- 2021-04-19: Use P2P to gossip necessary data for reverse sync.
+- 2021-03-03: Simplify proposal to the state sync case.
+- 2021-02-17: Add notes on asynchronicity of processes.
+- 2020-12-10: Rename backfill blocks to reverse sync.
+- 2020-11-25: Initial draft.
+
+## Author(s)
+
+- Callum Waters (@cmwaters)
+
+## Context
+
+Two new features: [Block pruning](https://github.com/tendermint/tendermint/issues/3652)
+and [State sync](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-042-state-sync.md)
+meant nodes no longer needed a complete history of the blockchain. This
+introduced some challenges of its own which were covered and subsequently
+tackled with [RFC-001](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-077-block-retention.md).
+The RFC allowed applications to set a block retention height: an upper bound on
+what blocks would be pruned. However, nodes that state sync past this upper bound
+(which is necessary as snapshots must be saved within the trusting period for
+the assisting light client to verify) have no means of backfilling the blocks
+to meet the retention limit. This could be a problem, as nodes that state sync and
+then eventually switch to consensus (or fast sync) may not have the block and
+validator history to verify evidence, causing them to panic if they see a 2/3
+commit on what they believe to be an invalid block.
+
+Thus, this RFC sets out to instil a minimum block history invariant amongst
+honest nodes.
+
+## Proposal
+
+A backfill mechanism can simply be defined as an algorithm for fetching,
+verifying, and storing headers and validator sets of heights prior to the
+current base of the node's blockchain. In matching the terminology used for
+other data retrieving protocols (i.e. fast sync and state sync), we
+call this method **ReverseSync**.
+
+We will define the mechanism in four sections:
+
+- Usage
+- Design
+- Verification
+- Termination
+
+### Usage
+
+For now, we focus purely on the case of a state syncing node, which, after
+syncing to a height, will need to verify historical data in order to be capable
+of processing new blocks. We can denote the earliest height that the node will
+need to verify and store in order to be able to verify any evidence that might
+arise as the `max_historical_height`/`time`. Both height and time are necessary
+as this maps to the BFT time used for evidence expiration. After acquiring
+`State`, we calculate these parameters as:
+
+```go
+max_historical_height = max(state.InitialHeight, state.LastBlockHeight - state.ConsensusParams.EvidenceAgeHeight)
+max_historical_time = max(GenesisTime, state.LastBlockTime.Add(-state.ConsensusParams.EvidenceAgeTime))
+```
+
+Before starting either fast sync or consensus, we then run the following
+synchronous process:
+
+```go
+func ReverseSync(max_historical_height int64, max_historical_time time.Time) error
+```
+
+Here we fetch and verify blocks until we reach a block `A` where
+`A.Height <= max_historical_height` and `A.Time <= max_historical_time`.
+
+Upon successfully reverse syncing, a node can now safely continue. As this
+feature is only used as part of state sync, one can think of this as merely an
+extension to it.
+
+In the future we may want to extend this functionality to allow nodes to fetch
+historical blocks for reasons of accountability or data accessibility.
+
+### Design
+
+This section will provide a high level overview of some of the more important
+characteristics of the design, saving the more tedious details as an ADR.
+
+#### P2P
+
+Implementation of this RFC will require the addition of a new channel and two
+new messages.
+
+```proto
+message LightBlockRequest {
+ uint64 height = 1;
+}
+```
+
+```proto
+message LightBlockResponse {
+ Header header = 1;
+ Commit commit = 2;
+ ValidatorSet validator_set = 3;
+}
+```
+
+The P2P path may also enable P2P-networked light clients and a state sync that
+doesn't need to rely on RPC.
+
+### Verification
+
+ReverseSync is used to fetch the following data structures:
+
+- `Header`
+- `Commit`
+- `ValidatorSet`
+
+Nodes will also need to be able to verify these. This can be achieved by first
+retrieving the header at the base height from the block store. From this trusted
+header, the node hashes each of the three data structures and checks that they are correct.
+
+1. The trusted header's last block ID matches the hash of the new header
+
+ ```go
+ header[height].LastBlockID == hash(header[height-1])
+ ```
+
+2. The trusted header's last commit hash matches the hash of the new commit
+
+ ```go
+ header[height].LastCommitHash == hash(commit[height-1])
+ ```
+
+3. Given that the node now trusts the new header, check that the header's validator set
+ hash matches the hash of the validator set
+
+ ```go
+ header[height-1].ValidatorsHash == hash(validatorSet[height-1])
+ ```
+
+### Termination
+
+ReverseSync draws a lot of parallels with fast sync. An important consideration
+for fast sync that also extends to ReverseSync is termination. ReverseSync will
+finish its task when one of the following conditions has been met:
+
+1. It reaches a block `A` where `A.Height <= max_historical_height` and
+`A.Time <= max_historical_time`.
+2. None of its peers report having the block at the height below the
+process's current block.
+3. A global timeout.
+
+This implies that we can't guarantee adequate history and thus the term
+"invariant" can't be used in the strictest sense. In the case that the first
+condition isn't met, the node will log an error and optimistically attempt
+to continue with either fast sync or consensus.
+
+## Alternative Solutions
+
+The need for a minimum block history invariant stems purely from the need to
+validate evidence (although there may be some application-relevant needs as
+well). Because of this, an alternative could be to simply trust whatever the
+2/3+ majority has agreed upon and, in the case where a node is at the head of the
+blockchain, simply abstain from voting.
+
+As it stands, if 2/3+ vote on evidence a node can't verify, the node will halt,
+just as it does when 2/3+ vote on a header that the node sees as invalid
+(perhaps due to a different app hash).
+
+Another alternative is the method with which the relevant data is retrieved.
+Instead of introducing new messages to the P2P layer, RPC could have been used
+instead.
+
+The aforementioned data is already available via the following RPC endpoints:
+`/commit` for `Header`s and `/validators` for `ValidatorSet`s. It was
+decided predominantly due to the instability of the current RPC infrastructure
+that P2P be used instead.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+- Ensures a minimum block history invariant for honest nodes. This will allow
+ nodes to verify evidence.
+
+### Negative
+
+- State sync will be slower, as more processing is required.
+
+### Neutral
+
+- Serving validator sets through p2p makes it easier to extend p2p support
+to light clients and state sync.
+- In the future, it may also be possible to extend this feature to allow
+nodes to freely fetch and verify prior blocks.
+
+## References
+
+- [RFC-001: Block retention](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-077-block-retention.md)
+- [Original issue](https://github.com/tendermint/tendermint/issues/4629)
diff --git a/docs/architecture/adr-081-protobuf-mgmt.md b/docs/architecture/adr-081-protobuf-mgmt.md
new file mode 100644
index 0000000000..1199cff1b4
--- /dev/null
+++ b/docs/architecture/adr-081-protobuf-mgmt.md
@@ -0,0 +1,201 @@
+# ADR 081: Protocol Buffers Management
+
+## Changelog
+
+- 2022-02-28: First draft
+
+## Status
+
+Accepted
+
+[Tracking issue](https://github.com/tendermint/tendermint/issues/8121)
+
+## Context
+
+At present, we manage the [Protocol Buffers] schema files ("protos") that define
+our wire-level data formats within the Tendermint repository itself (see the
+[`proto`](../../proto/) directory). Recently, we have been making use of [Buf],
+both locally and in CI, in order to generate Go stubs, and lint and check
+`.proto` files for breaking changes.
+
+The version of Buf used at the time of this decision was `v1beta1`, and it was
+discussed in [\#7975] and in weekly calls as to whether we should upgrade to
+`v1` and harmonize our approach with that used by the Cosmos SDK. The team
+managing the Cosmos SDK was primarily interested in having our protos versioned
+and easily accessible from the [Buf] registry.
+
+The three main sets of stakeholders for the `.proto` files and their needs, as
+currently understood, are as follows.
+
+1. Tendermint needs Go code generated from `.proto` files.
+2. Consumers of Tendermint's `.proto` files, specifically projects that want to
+ interoperate with Tendermint and need to generate code for their own
+ programming language, want to be able to access these files in a reliable and
+ efficient way.
+3. The Tendermint Core team wants to provide stable interfaces that are as easy
+ as possible to maintain, on which consumers can depend, and to be able to
+ notify those consumers promptly when those interfaces change. To this end, we
+ want to:
+ 1. Prevent any breaking changes from being introduced in minor/patch releases
+ of Tendermint. Only major version updates should be able to contain
+ breaking interface changes.
+ 2. Prevent generated code from diverging from the Protobuf schema files.
+
+There was also discussion surrounding the notion of automated documentation
+generation and hosting, but it is not clear at this time whether this would be
+that valuable to any of our stakeholders. What would, of course, be valuable at
+a minimum is better documentation (in comments) of the `.proto` files
+themselves.
+
+## Alternative Approaches
+
+### Meeting stakeholders' needs
+
+1. Go stub generation from protos. We could use:
+ 1. [Buf]. This approach has been rather cumbersome up to this point, and it
+ is not clear what Buf really provides beyond that which `protoc` provides
+ to justify the additional complexity in configuring Buf for stub
+ generation.
+ 2. [protoc] - the Protocol Buffers compiler.
+2. Notification of breaking changes:
+ 1. Buf in CI for all pull requests to *release* branches only (and not on
+ `master`).
+ 2. Buf in CI on every pull request to every branch (this was the case at the
+ time of this decision, and the team decided that the signal-to-noise ratio
+ for this approach was too low to be of value).
+3. `.proto` linting:
+ 1. Buf in CI on every pull request
+4. `.proto` formatting:
+ 1. [clang-format] locally and a [clang-format GitHub Action] in CI to check
+ that files are formatted properly on every pull request.
+5. Sharing of `.proto` files in a versioned, reliable manner:
+ 1. Consumers could simply clone the Tendermint repository, check out a
+ specific commit, tag or branch and manually copy out all of the `.proto`
+ files they need. This requires no effort from the Tendermint Core team and
+ will continue to be an option for consumers. The drawback of this approach
+ is that it requires manual coding/scripting to implement and is brittle in
+ the face of bigger changes.
+ 2. Uploading our `.proto` files to Buf's registry on every release. This is
+ by far the most seamless for consumers of our `.proto` files, but requires
+ the dependency on Buf. This has the additional benefit that the Buf
+ registry will automatically [generate and host
+ documentation][buf-docs-gen] for these protos.
+ 3. We could create a process that, upon release, creates a `.zip` file
+ containing our `.proto` files.
+
+### Popular alternatives to Buf
+
+[Prototool] was not considered as it appears deprecated, and the ecosystem seems
+to be converging on Buf at this time.
+
+### Tooling complexity
+
+The more tools we have in our build/CI processes, the more complex and fragile
+repository/CI management becomes, and the longer it takes to onboard new team
+members. Maintainability is a core concern here.
+
+### Buf sustainability and costs
+
+One of the primary considerations regarding the usage of Buf is whether, for
+example, access to its registry will eventually become a
+paid-for/subscription-based service and whether this is valuable enough for us
+and the ecosystem to pay for such a service. At this time, it appears as though
+Buf will never charge for hosting open source projects' protos.
+
+Another consideration was Buf's sustainability as a project - what happens when
+their resources run out? Will there be a strong and broad enough open source
+community to continue maintaining it?
+
+### Local Buf usage options
+
+Local usage of Buf (i.e. not in CI) can be accomplished in two ways:
+
+1. Installing the relevant tools individually.
+2. Using its [Docker image][buf-docker].
+
+Local installation of Buf requires developers to manually keep their toolchains
+up-to-date. The Docker option comes with a number of complexities, including
+how the file system permissions of code generated by a Docker container differ
+between platforms (e.g. on Linux, Buf-generated code ends up being owned by
+`root`).
+
+The trouble with the Docker-based approach is that we make use of the
+[gogoprotobuf] plugin for `protoc`. Continuing to use the Docker-based approach
+to using Buf will mean that we will have to continue building our own custom
+Docker image with embedded gogoprotobuf.
+
+Along these lines, we could eventually consider coming up with a [Nix]- or
+[redo]-based approach to developer tooling to ensure tooling consistency across
+the team and for anyone who wants to be able to contribute to Tendermint.
+
+## Decision
+
+1. We will adopt Buf for now for proto generation, linting, breakage checking
+ and its registry (mainly in CI, with optional usage locally).
+2. Failing CI when checking for breaking changes in `.proto` files will only
+ happen when performing minor/patch releases.
+3. Local tooling will be favored over Docker-based tooling.
+
+## Detailed Design
+
+We currently aim to:
+
+1. Update to Buf `v1` to facilitate linting, breakage checking and uploading to
+ the Buf registry.
+2. Configure CI appropriately for proto management:
+ 1. Uploading protos to the Buf registry on every release (e.g. the
+ [approach][cosmos-sdk-buf-registry-ci] used by the Cosmos SDK).
+ 2. Linting on every pull request (e.g. the
+ [approach][cosmos-sdk-buf-linting-ci] used by the Cosmos SDK). The linter
+ passing should be considered a requirement for accepting PRs.
+ 3. Checking for breaking changes in minor/patch version releases and failing
+ CI accordingly - see [\#8003].
+    4. Adding the [clang-format GitHub Action] to check `.proto` file
+       formatting. Format checking should be considered a requirement for
+       accepting PRs.
+3. Update the Tendermint [`Makefile`](../../Makefile) to primarily facilitate
+ local Protobuf stub generation, linting, formatting and breaking change
+ checking. More specifically:
+ 1. This includes removing the dependency on Docker and introducing the
+ dependency on local toolchain installation. CI-based equivalents, where
+ relevant, will rely on specific GitHub Actions instead of the Makefile.
+ 2. Go code generation will rely on `protoc` directly.
+
+## Consequences
+
+### Positive
+
+- We will still offer Go stub generation, proto linting and breakage checking.
+- Breakage checking will only happen on minor/patch releases to increase the
+ signal-to-noise ratio in CI.
+- Versioned protos will be made available via Buf's registry upon every release.
+
+### Negative
+
+- Developers/contributors will need to install the relevant Protocol
+ Buffers-related tooling (Buf, gogoprotobuf, clang-format) locally in order to
+ build, lint, format and check `.proto` files for breaking changes.
+
+### Neutral
+
+## References
+
+- [Protocol Buffers]
+- [Buf]
+- [\#7975]
+- [protoc] - The Protocol Buffers compiler
+
+[Protocol Buffers]: https://developers.google.com/protocol-buffers
+[Buf]: https://buf.build/
+[\#7975]: https://github.com/tendermint/tendermint/pull/7975
+[protoc]: https://github.com/protocolbuffers/protobuf
+[clang-format]: https://clang.llvm.org/docs/ClangFormat.html
+[clang-format GitHub Action]: https://github.com/marketplace/actions/clang-format-github-action
+[buf-docker]: https://hub.docker.com/r/bufbuild/buf
+[cosmos-sdk-buf-registry-ci]: https://github.com/cosmos/cosmos-sdk/blob/e6571906043b6751951a42b6546431b1c38b05bd/.github/workflows/proto-registry.yml
+[cosmos-sdk-buf-linting-ci]: https://github.com/cosmos/cosmos-sdk/blob/e6571906043b6751951a42b6546431b1c38b05bd/.github/workflows/proto.yml#L15
+[\#8003]: https://github.com/tendermint/tendermint/issues/8003
+[Nix]: https://nixos.org/
+[gogoprotobuf]: https://github.com/gogo/protobuf
+[Prototool]: https://github.com/uber/prototool
+[buf-docs-gen]: https://docs.buf.build/bsr/documentation
+[redo]: https://redo.readthedocs.io/en/latest/
diff --git a/docs/introduction/architecture.md b/docs/introduction/architecture.md
index 3b70e70151..27e1b34c66 100644
--- a/docs/introduction/architecture.md
+++ b/docs/introduction/architecture.md
@@ -61,7 +61,7 @@ Here are some relevant facts about TCP:
![tcp](../imgs/tcp-window.png)
-In order to have performant TCP connections under the conditions created in Tendermint, we've created the `mconnection`, or the multiplexing connection. It is our own protocol built on top of TCP. It lets us reuse TCP connections to minimize overhead, and it keeps the window size high by sending auxiliary messages when necessary.
+In order to have performant TCP connections under the conditions created in Tendermint, we've created the `mconnection`, or the multiplexing connection. It is our own protocol built on top of TCP. It lets us reuse TCP connections to minimize overhead, and it keeps the window size high by sending auxiliary messages when necessary.
The `mconnection` is represented by a struct, which contains a batch of messages, read and write buffers, and a map of channel IDs to reactors. It communicates with TCP via file descriptors, which it can write to. There is one `mconnection` per peer connection.
diff --git a/docs/introduction/what-is-tendermint.md b/docs/introduction/what-is-tendermint.md
index 2386626eac..417152d748 100644
--- a/docs/introduction/what-is-tendermint.md
+++ b/docs/introduction/what-is-tendermint.md
@@ -68,10 +68,10 @@ Tendermint is in essence similar software, but with two key differences:
- It is Byzantine Fault Tolerant, meaning it can only tolerate up to a
1/3 of failures, but those failures can include arbitrary behaviour -
- including hacking and malicious attacks.
-- It does not specify a particular application, like a fancy key-value
- store. Instead, it focuses on arbitrary state machine replication,
- so developers can build the application logic that's right for them,
+ including hacking and malicious attacks.
+- It does not specify a particular application, like a fancy key-value
+ store. Instead, it focuses on arbitrary state machine replication,
+ so developers can build the application logic that's right for them,
from key-value store to cryptocurrency to e-voting platform and beyond.
### Bitcoin, Ethereum, etc
@@ -104,12 +104,10 @@ to Tendermint, but is more opinionated about how the state is managed,
and requires that all application behaviour runs in potentially many
docker containers, modules it calls "chaincode". It uses an
implementation of [PBFT](http://pmg.csail.mit.edu/papers/osdi99.pdf).
-from a team at IBM that is [augmented to handle potentially
-non-deterministic
-chaincode](https://www.zurich.ibm.com/~cca/papers/sieve.pdf) It is
-possible to implement this docker-based behaviour as a ABCI app in
-Tendermint, though extending Tendermint to handle non-determinism
-remains for future work.
+from a team at IBM that is augmented to handle potentially non-deterministic
+chaincode. It is possible to implement this docker-based behaviour as an ABCI
+app in Tendermint, though extending Tendermint to handle non-determinism
+remains for future work.
[Burrow](https://github.com/hyperledger/burrow) is an implementation of
the Ethereum Virtual Machine and Ethereum transaction mechanics, with
diff --git a/docs/networks/README.md b/docs/networks/README.md
deleted file mode 100644
index 0b14e391be..0000000000
--- a/docs/networks/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-order: 1
-parent:
- title: Networks
- order: 1
----
-
-# Overview
-
-Use [Docker Compose](./docker-compose.md) to spin up Tendermint testnets on your
-local machine.
-
-Use [Terraform and Ansible](./terraform-and-ansible.md) to deploy Tendermint
-testnets to the cloud.
-
-See the `tendermint testnet --help` command for more help initializing testnets.
diff --git a/docs/nodes/README.md b/docs/nodes/README.md
index 9be6febf03..fd9056e0dd 100644
--- a/docs/nodes/README.md
+++ b/docs/nodes/README.md
@@ -1,7 +1,7 @@
---
order: 1
parent:
- title: Nodes
+ title: Node Operators
order: 4
---
diff --git a/docs/nodes/configuration.md b/docs/nodes/configuration.md
index e0cfe501a5..a55bfb63a2 100644
--- a/docs/nodes/configuration.md
+++ b/docs/nodes/configuration.md
@@ -16,7 +16,8 @@ the parameters set with their default values. It will look something
like the file below, however, double check by inspecting the
`config.toml` created with your version of `tendermint` installed:
-```toml# This is a TOML config file.
+```toml
+# This is a TOML config file.
# For more information, see https://github.com/toml-lang/toml
# NOTE: Any path below can be absolute (e.g. "/var/myawesomeapp/data") or
@@ -33,11 +34,10 @@ like the file below, however, double check by inspecting the
proxy-app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
-moniker = "ape"
-
+moniker = "sidewinder"
-# Mode of Node: full | validator | seed (default: "validator")
-# * validator node (default)
+# Mode of Node: full | validator | seed
+# * validator node
# - all reactors
# - with priv_validator_key.json, priv_validator_state.json
# * full node
@@ -48,11 +48,6 @@ moniker = "ape"
# - No priv_validator_key.json, priv_validator_state.json
mode = "validator"
-# If this node is many blocks behind the tip of the chain, FastSync
-# allows them to catchup quickly by downloading blocks in parallel
-# and verifying their commits
-fast-sync = true
-
# Database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb
# * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
# - pure go
@@ -120,10 +115,10 @@ laddr = ""
client-certificate-file = ""
# Client key generated while creating certificates for secure connection
-validator-client-key-file = ""
+client-key-file = ""
# Path to the Root Certificate Authority used to sign both client and server certificates
-certificate-authority = ""
+root-ca-file = ""
#######################################################################
@@ -149,26 +144,10 @@ cors-allowed-methods = ["HEAD", "GET", "POST", ]
# A list of non simple headers the client is allowed to use with cross-domain requests
cors-allowed-headers = ["Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time", ]
-# TCP or UNIX socket address for the gRPC server to listen on
-# NOTE: This server only supports /broadcast_tx_commit
-# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
-grpc-laddr = ""
-
-# Maximum number of simultaneous connections.
-# Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
-# If you want to accept a larger number than the default, make sure
-# you increase your OS limits.
-# 0 - unlimited.
-# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
-# 1024 - 40 - 10 - 50 = 924 = ~900
-# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
-grpc-max-open-connections = 900
-
# Activate unsafe RPC commands like /dial-seeds and /unsafe-flush-mempool
unsafe = false
# Maximum number of simultaneous connections (including WebSocket).
-# Does not include gRPC connections. See grpc-max-open-connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
@@ -182,10 +161,37 @@ max-open-connections = 900
max-subscription-clients = 100
# Maximum number of unique queries a given client can /subscribe to
-# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to
-# the estimated # maximum number of broadcast_tx_commit calls per block.
+# If you're using a Local RPC client and /broadcast_tx_commit, set this
+# to the estimated maximum number of broadcast_tx_commit calls per block.
max-subscriptions-per-client = 5
+# If true, disable the websocket interface to the RPC service. This has
+# the effect of disabling the /subscribe, /unsubscribe, and /unsubscribe_all
+# methods for event subscription.
+#
+# EXPERIMENTAL: This setting will be removed in Tendermint v0.37.
+experimental-disable-websocket = false
+
+# The time window size for the event log. All events up to this long before
+# the latest (up to EventLogMaxItems) will be available for subscribers to
+# fetch via the /events method. If 0 (the default) the event log and the
+# /events RPC method are disabled.
+event-log-window-size = "0s"
+
+# The maximum number of events that may be retained by the event log. If
+# this value is 0, no upper limit is set. Otherwise, items in excess of
+# this number will be discarded from the event log.
+#
+# Warning: This setting is a safety valve. Setting it too low may cause
+# subscribers to miss events. Try to choose a value higher than the
+# maximum worst-case expected event load within the chosen window size in
+# ordinary operation.
+#
+# For example, if the window size is 10 minutes and the node typically
+# averages 1000 events per ten minutes, but with occasional known spikes of
+# up to 2000, choose a value > 2000.
+event-log-max-items = 0
+
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
# global HTTP write timeout, which applies to all connections and endpoints.
@@ -221,9 +227,6 @@ pprof-laddr = ""
#######################################################
[p2p]
-# Enable the legacy p2p layer.
-use-legacy = false
-
# Select the p2p internal queue
queue-type = "priority"
@@ -255,87 +258,48 @@ persistent-peers = ""
# UPNP port forwarding
upnp = false
-# Path to address book
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
-addr-book-file = "config/addrbook.json"
-
-# Set true for strict address routability rules
-# Set false for private or local networks
-addr-book-strict = true
-
-# Maximum number of inbound peers
-#
-# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
-# ref: https://github.com/tendermint/tendermint/issues/5670
-max-num-inbound-peers = 40
-
-# Maximum number of outbound peers to connect to, excluding persistent peers
-#
-# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
-# ref: https://github.com/tendermint/tendermint/issues/5670
-max-num-outbound-peers = 10
-
# Maximum number of connections (inbound and outbound).
max-connections = 64
# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = 100
-# List of node IDs, to which a connection will be (re)established ignoring any existing limits
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
-unconditional-peer-ids = ""
+# Set true to enable the peer-exchange reactor
+pex = true
-# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
-persistent-peers-max-dial-period = "0s"
+# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
+# Warning: IPs will be exposed at /net_info, for more information https://github.com/tendermint/tendermint/issues/3055
+private-peer-ids = ""
+
+# Toggle to disable guard against peers connecting from the same ip.
+allow-duplicate-ip = false
+
+# Peer connection configuration.
+handshake-timeout = "20s"
+dial-timeout = "3s"
# Time to wait before flushing messages out on the connection
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
flush-throttle-timeout = "100ms"
# Maximum size of a message packet payload, in bytes
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
max-packet-msg-payload-size = 1400
# Rate at which packets can be sent, in bytes/second
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
send-rate = 5120000
# Rate at which packets can be received, in bytes/second
-# TODO: Remove once p2p refactor is complete
-# ref: https:#github.com/tendermint/tendermint/issues/5670
+# TODO: Remove once MConnConnection is removed.
recv-rate = 5120000
-# Set true to enable the peer-exchange reactor
-pex = true
-
-# Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
-# Warning: IPs will be exposed at /net_info, for more information https://github.com/tendermint/tendermint/issues/3055
-private-peer-ids = ""
-
-# Toggle to disable guard against peers connecting from the same ip.
-allow-duplicate-ip = false
-
-# Peer connection configuration.
-handshake-timeout = "20s"
-dial-timeout = "3s"
#######################################################
### Mempool Configuration Option ###
#######################################################
[mempool]
-# Mempool version to use:
-# 1) "v0" - The legacy non-prioritized mempool reactor.
-# 2) "v1" (default) - The prioritized mempool reactor.
-version = "v1"
-
recheck = true
broadcast = true
@@ -391,22 +355,30 @@ ttl-num-blocks = 0
# starting from the height of the snapshot.
enable = false
-# RPC servers (comma-separated) for light client verification of the synced state machine and
-# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding
-# header hash obtained from a trusted source, and a period during which validators can be trusted.
-#
-# For Cosmos SDK-based chains, trust-period should usually be about 2/3 of the unbonding time (~2
-# weeks) during which they can be financially punished (slashed) for misbehavior.
+# State sync uses light client verification to verify state. This can be done either through the
+# P2P layer or RPC layer. Set this to true to use the P2P layer. If false (default), RPC layer
+# will be used.
+use-p2p = false
+
+# If using RPC, at least two addresses need to be provided. They should be compatible with net.Dial,
+# for example: "host.example.com:2125"
rpc-servers = ""
+
+# The hash and height of a trusted block. Must be within the trust-period.
trust-height = 0
trust-hash = ""
+
+# The trust period should be set so that Tendermint can detect and gossip misbehavior before
+# it is considered expired. For chains based on the Cosmos SDK, one day less than the unbonding
+# period should suffice.
trust-period = "168h0m0s"
# Time to spend discovering snapshots before initiating a restore.
discovery-time = "15s"
-# Temporary directory for state sync snapshot chunks, defaults to the OS tempdir (typically /tmp).
-# Will create a new, randomly named directory within, and remove it when done.
+# Temporary directory for state sync snapshot chunks, defaults to os.TempDir().
+# The synchronizer will create a new, randomly named directory within this directory
+# and remove it when the sync is complete.
temp-dir = ""
# The timeout duration before re-requesting a chunk, possibly from a different
@@ -416,21 +388,6 @@ chunk-request-timeout = "15s"
# The number of concurrent chunk and block fetchers to run (default: 4).
fetchers = "4"
-#######################################################
-### Block Sync Configuration Connections ###
-#######################################################
-[blocksync]
-
-# If this node is many blocks behind the tip of the chain, BlockSync
-# allows them to catchup quickly by downloading blocks in parallel
-# and verifying their commits
-enable = true
-
-# Block Sync version to use:
-# 1) "v0" (default) - the standard block sync implementation
-# 2) "v2" - DEPRECATED, please use v0
-version = "v0"
-
#######################################################
### Consensus Configuration Options ###
#######################################################
@@ -438,32 +395,12 @@ version = "v0"
wal-file = "data/cs.wal/wal"
-# How long we wait for a proposal block before prevoting nil
-timeout-propose = "3s"
-# How much timeout-propose increases with each round
-timeout-propose-delta = "500ms"
-# How long we wait after receiving +2/3 prevotes for “anything” (ie. not a single block or nil)
-timeout-prevote = "1s"
-# How much the timeout-prevote increases with each round
-timeout-prevote-delta = "500ms"
-# How long we wait after receiving +2/3 precommits for “anything” (ie. not a single block or nil)
-timeout-precommit = "1s"
-# How much the timeout-precommit increases with each round
-timeout-precommit-delta = "500ms"
-# How long we wait after committing a block, before starting on the new
-# height (this gives us a chance to receive some more precommits, even
-# though we already have +2/3).
-timeout-commit = "1s"
-
# How many blocks to look back to check existence of the node's consensus votes before joining consensus
# When non-zero, the node will panic upon restart
# if the same consensus key was used to sign {double-sign-check-height} last blocks.
# So, validators should stop the state machine, wait for some blocks, and then restart the state machine to avoid panic.
double-sign-check-height = 0
-# Make progress as soon as we have all the precommits (as if TimeoutCommit = 0)
-skip-timeout-commit = false
-
# EmptyBlocks mode and possible interval between empty blocks
create-empty-blocks = true
create-empty-blocks-interval = "0s"
@@ -472,6 +409,50 @@ create-empty-blocks-interval = "0s"
peer-gossip-sleep-duration = "100ms"
peer-query-maj23-sleep-duration = "2s"
+### Unsafe Timeout Overrides ###
+
+# These fields provide temporary overrides for the Timeout consensus parameters.
+# Use of these parameters is strongly discouraged. Using these parameters may have serious
+# liveness implications for the validator and for the chain.
+#
+# These fields will be removed from the configuration file in the v0.37 release of Tendermint.
+# For additional information, see ADR-74:
+# https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-074-timeout-params.md
+
+# This field provides an unsafe override of the Propose timeout consensus parameter.
+# This field configures how long the consensus engine will wait for a proposal block before prevoting nil.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-propose-timeout-override = 0s
+
+# This field provides an unsafe override of the ProposeDelta timeout consensus parameter.
+# This field configures how much the propose timeout increases with each round.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-propose-timeout-delta-override = 0s
+
+# This field provides an unsafe override of the Vote timeout consensus parameter.
+# This field configures how long the consensus engine will wait after
+# receiving +2/3 votes in a round.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-vote-timeout-override = 0s
+
+# This field provides an unsafe override of the VoteDelta timeout consensus parameter.
+# This field configures how much the vote timeout increases with each round.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-vote-timeout-delta-override = 0s
+
+# This field provides an unsafe override of the Commit timeout consensus parameter.
+# This field configures how long the consensus engine will wait after receiving
+# +2/3 precommits before beginning the next height.
+# If this field is set to a value greater than 0, it will take effect.
+# unsafe-commit-timeout-override = 0s
+
+# This field provides an unsafe override of the BypassCommitTimeout consensus parameter.
+# This field configures if the consensus engine will wait for the full Commit timeout
+# before proceeding to the next height.
+# If this field is set to true, the consensus engine will proceed to the next height
+# as soon as the node has gathered votes from all of the validators on the network.
+# unsafe-bypass-commit-timeout-override =
+
#######################################################
### Transaction Indexer Configuration Options ###
#######################################################
@@ -546,46 +527,6 @@ transactions every `create-empty-blocks-interval`. For instance, with
Tendermint will only create blocks if there are transactions, or after waiting
30 seconds without receiving any transactions.
-## Consensus timeouts explained
-
-There's a variety of information about timeouts in [Running in
-production](../tendermint-core/running-in-production.md)
-
-You can also find more detailed technical explanation in the spec: [The latest
-gossip on BFT consensus](https://arxiv.org/abs/1807.04938).
-
-```toml
-[consensus]
-...
-
-timeout-propose = "3s"
-timeout-propose-delta = "500ms"
-timeout-prevote = "1s"
-timeout-prevote-delta = "500ms"
-timeout-precommit = "1s"
-timeout-precommit-delta = "500ms"
-timeout-commit = "1s"
-```
-
-Note that in a successful round, the only timeout that we absolutely wait no
-matter what is `timeout-commit`.
-
-Here's a brief summary of the timeouts:
-
-- `timeout-propose` = how long we wait for a proposal block before prevoting
- nil
-- `timeout-propose-delta` = how much timeout-propose increases with each round
-- `timeout-prevote` = how long we wait after receiving +2/3 prevotes for
- anything (ie. not a single block or nil)
-- `timeout-prevote-delta` = how much the timeout-prevote increases with each
- round
-- `timeout-precommit` = how long we wait after receiving +2/3 precommits for
- anything (ie. not a single block or nil)
-- `timeout-precommit-delta` = how much the timeout-precommit increases with
- each round
-- `timeout-commit` = how long we wait after committing a block, before starting
- on the new height (this gives us a chance to receive some more precommits,
- even though we already have +2/3)
## P2P settings
@@ -597,7 +538,7 @@ This section will cover settings within the p2p section of the `config.toml`.
- `pex` = turns the peer exchange reactor on or off. Validator node will want the `pex` turned off so it would not begin gossiping to unknown peers on the network. PeX can also be turned off for statically configured networks with fixed network connectivity. For full nodes on open, dynamic networks, it should be turned on.
- `private-peer-ids` = is a comma-separated list of node ids that will _not_ be exposed to other peers (i.e., you will not tell other peers about the ids in this list). This can be filled with a validator's node id.
-Recently the Tendermint Team conducted a refactor of the p2p layer. This lead to multiple config paramters being deprecated and/or replaced.
+Recently the Tendermint Team conducted a refactor of the p2p layer. This led to multiple config parameters being deprecated and/or replaced.
We will cover the new and deprecated parameters below.
### New Parameters
@@ -651,3 +592,27 @@ Example:
```shell
$ psql ... -f state/indexer/sink/psql/schema.sql
```
+
+## Unsafe Consensus Timeout Overrides
+
+Tendermint version v0.36 provides a set of unsafe overrides for the consensus
+timing parameters. These parameters are provided as a safety measure in case of
+unusual timing issues during the upgrade to v0.36 so that an operator may
+override the timings for a single node. These overrides will be removed
+entirely in Tendermint v0.37.
+
+- `unsafe-propose-timeout-override`: How long the Tendermint consensus engine
+  will wait for a proposal block before prevoting nil.
+- `unsafe-propose-timeout-delta-override`: How much the propose timeout
+  increases with each round.
+- `unsafe-vote-timeout-override`: How long the consensus engine will wait after
+  receiving +2/3 votes in a round.
+- `unsafe-vote-timeout-delta-override`: How much the vote timeout increases
+  with each round.
+- `unsafe-commit-timeout-override`: How long the consensus engine will wait
+  after receiving +2/3 precommits before beginning the next height.
+- `unsafe-bypass-commit-timeout-override`: Configures whether the consensus
+  engine will wait for the full commit timeout before proceeding to the next
+  height. If this field is set to true, the consensus engine will proceed to
+  the next height as soon as the node has gathered votes from all of the
+  validators on the network.
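+
+Taken together, an operator could set these overrides in the `[consensus]`
+section of `config.toml`. The snippet below is an illustrative sketch only:
+the values are placeholders, the key names follow the commented defaults
+shown in the generated config file, and any duration override left at `0s`
+has no effect:
+
+```toml
+[consensus]
+# Illustrative values only -- tune them for your network.
+unsafe-propose-timeout-override = "3s"
+unsafe-propose-timeout-delta-override = "500ms"
+unsafe-vote-timeout-override = "1s"
+unsafe-vote-timeout-delta-override = "500ms"
+unsafe-commit-timeout-override = "1s"
+unsafe-bypass-commit-timeout-override = false
+```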
diff --git a/docs/nodes/logging.md b/docs/nodes/logging.md
index 31a9d08d20..9261dd0edf 100644
--- a/docs/nodes/logging.md
+++ b/docs/nodes/logging.md
@@ -50,7 +50,7 @@ little overview what they do.
they are coming from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
more details, please check out the
- [README](https://github.com/tendermint/spec/tree/master/spec/p2p).
+ [README](https://github.com/tendermint/tendermint/tree/master/spec/p2p).
- `rpc-server` RPC server. For implementation details, please read the
[doc.go](https://github.com/tendermint/tendermint/blob/v0.35.x/rpc/jsonrpc/doc.go).
- `state` Represents the latest state and execution submodule, which
@@ -120,7 +120,7 @@ Next follows a standard block creation cycle, where we enter a new
round, propose a block, receive more than 2/3 of prevotes, then
precommits and finally have a chance to commit a block. For details,
please refer to [Byzantine Consensus
-Algorithm](https://github.com/tendermint/spec/blob/master/spec/consensus/consensus.md).
+Algorithm](https://github.com/tendermint/tendermint/blob/master/spec/consensus/consensus.md).
```sh
I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
diff --git a/docs/nodes/metrics.md b/docs/nodes/metrics.md
index 6589e044aa..1b2e9f0070 100644
--- a/docs/nodes/metrics.md
+++ b/docs/nodes/metrics.md
@@ -40,6 +40,7 @@ The following metrics are available:
| consensus_fast_syncing | gauge | | either 0 (not fast syncing) or 1 (syncing) |
| consensus_state_syncing | gauge | | either 0 (not state syncing) or 1 (syncing) |
| consensus_block_size_bytes | Gauge | | Block size in bytes |
+| evidence_pool_num_evidence | Gauge | | Number of evidence items in the evidence pool |
| p2p_peers | Gauge | | Number of peers node's connected to |
| p2p_peer_receive_bytes_total | counter | peer_id, chID | number of bytes per channel received from a given peer |
| p2p_peer_send_bytes_total | counter | peer_id, chID | number of bytes per channel sent to a given peer |
diff --git a/docs/nodes/remote-signer.md b/docs/nodes/remote-signer.md
index e7dfccacdb..39a38e1b7a 100644
--- a/docs/nodes/remote-signer.md
+++ b/docs/nodes/remote-signer.md
@@ -37,7 +37,7 @@ There are two ways to generate certificates, [openssl](https://www.openssl.org/)
- Install `Certstrap`:
```sh
- go get github.com/square/certstrap@v1.2.0
+ go install github.com/square/certstrap@v1.2.0
```
- Create certificate authority for self signing.
diff --git a/docs/nodes/validators.md b/docs/nodes/validators.md
index b787fa8a46..e7c3a3cf43 100644
--- a/docs/nodes/validators.md
+++ b/docs/nodes/validators.md
@@ -109,9 +109,9 @@ Currently Tendermint uses [Ed25519](https://ed25519.cr.yp.to/) keys which are wi
> **+2/3 is short for "more than 2/3"**
A block is committed when +2/3 of the validator set sign [precommit
-votes](https://github.com/tendermint/spec/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#vote) for that block at the same `round`.
+votes](https://github.com/tendermint/tendermint/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#vote) for that block at the same `round`.
The +2/3 set of precommit votes is called a
-[_commit_](https://github.com/tendermint/spec/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#commit). While any +2/3 set of
+[_commit_](https://github.com/tendermint/tendermint/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#commit). While any +2/3 set of
precommits for the same block at the same height&round can serve as
validation, the canonical commit is included in the next block (see
-[LastCommit](https://github.com/tendermint/spec/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#lastcommit)).
+[LastCommit](https://github.com/tendermint/tendermint/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#lastcommit)).
diff --git a/docs/package-lock.json b/docs/package-lock.json
index 8bbdae8cc6..447c8c27d0 100644
--- a/docs/package-lock.json
+++ b/docs/package-lock.json
@@ -3037,9 +3037,9 @@
}
},
"node_modules/async": {
- "version": "2.6.3",
- "resolved": "https://registry.npmjs.org/async/-/async-2.6.3.tgz",
- "integrity": "sha512-zflvls11DCy+dQWzTW2dzuilv8Z5X/pjfmZOWba6TNIVDm+2UDaJmXSOXlasHKfNBs8oo3M0aT50fDEWfKZjXg==",
+ "version": "2.6.4",
+ "resolved": "https://registry.npmjs.org/async/-/async-2.6.4.tgz",
+ "integrity": "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA==",
"dependencies": {
"lodash": "^4.17.14"
}
@@ -8876,9 +8876,9 @@
}
},
"node_modules/minimist": {
- "version": "1.2.5",
- "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
- "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
+ "version": "1.2.6",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz",
+ "integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q=="
},
"node_modules/mississippi": {
"version": "3.0.0",
@@ -10389,9 +10389,9 @@
}
},
"node_modules/prismjs": {
- "version": "1.26.0",
- "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.26.0.tgz",
- "integrity": "sha512-HUoH9C5Z3jKkl3UunCyiD5jwk0+Hz0fIgQ2nbwU2Oo/ceuTAQAg+pPVnfdt2TJWRVLcxKh9iuoYDUSc8clb5UQ==",
+ "version": "1.27.0",
+ "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.27.0.tgz",
+ "integrity": "sha512-t13BGPUlFDR7wRB5kQDG4jjl7XeuH6jbJGt11JHPL96qwsEHNX2+68tFXqc1/k+/jALsbSWJKUOT/hcYAZ5LkA==",
"engines": {
"node": ">=6"
}
@@ -13045,9 +13045,9 @@
}
},
"node_modules/url-parse": {
- "version": "1.5.7",
- "resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz",
- "integrity": "sha512-HxWkieX+STA38EDk7CE9MEryFeHCKzgagxlGvsdS7WBImq9Mk+PGwiT56w82WI3aicwJA8REp42Cxo98c8FZMA==",
+ "version": "1.5.10",
+ "resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.10.tgz",
+ "integrity": "sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==",
"dependencies": {
"querystringify": "^2.1.1",
"requires-port": "^1.0.0"
@@ -16588,9 +16588,9 @@
"integrity": "sha1-WWZ/QfrdTyDMvCu5a41Pf3jsA2c="
},
"async": {
- "version": "2.6.3",
- "resolved": "https://registry.npmjs.org/async/-/async-2.6.3.tgz",
- "integrity": "sha512-zflvls11DCy+dQWzTW2dzuilv8Z5X/pjfmZOWba6TNIVDm+2UDaJmXSOXlasHKfNBs8oo3M0aT50fDEWfKZjXg==",
+ "version": "2.6.4",
+ "resolved": "https://registry.npmjs.org/async/-/async-2.6.4.tgz",
+ "integrity": "sha512-mzo5dfJYwAn29PeiJ0zvwTo04zj8HDJj0Mn8TD7sno7q12prdbnasKJHhkm2c1LgrhlJ0teaea8860oxi51mGA==",
"requires": {
"lodash": "^4.17.14"
}
@@ -21113,9 +21113,9 @@
}
},
"minimist": {
- "version": "1.2.5",
- "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
- "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
+ "version": "1.2.6",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz",
+ "integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q=="
},
"mississippi": {
"version": "3.0.0",
@@ -22350,9 +22350,9 @@
"integrity": "sha512-28iF6xPQrP8Oa6uxE6a1biz+lWeTOAPKggvjB8HAs6nVMKZwf5bG++632Dx614hIWgUPkgivRfG+a8uAXGTIbA=="
},
"prismjs": {
- "version": "1.26.0",
- "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.26.0.tgz",
- "integrity": "sha512-HUoH9C5Z3jKkl3UunCyiD5jwk0+Hz0fIgQ2nbwU2Oo/ceuTAQAg+pPVnfdt2TJWRVLcxKh9iuoYDUSc8clb5UQ=="
+ "version": "1.27.0",
+ "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.27.0.tgz",
+ "integrity": "sha512-t13BGPUlFDR7wRB5kQDG4jjl7XeuH6jbJGt11JHPL96qwsEHNX2+68tFXqc1/k+/jALsbSWJKUOT/hcYAZ5LkA=="
},
"process": {
"version": "0.11.10",
@@ -24536,9 +24536,9 @@
}
},
"url-parse": {
- "version": "1.5.7",
- "resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz",
- "integrity": "sha512-HxWkieX+STA38EDk7CE9MEryFeHCKzgagxlGvsdS7WBImq9Mk+PGwiT56w82WI3aicwJA8REp42Cxo98c8FZMA==",
+ "version": "1.5.10",
+ "resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.10.tgz",
+ "integrity": "sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==",
"requires": {
"querystringify": "^2.1.1",
"requires-port": "^1.0.0"
diff --git a/docs/pre.sh b/docs/pre.sh
index 37193d265b..76a1cff99a 100755
--- a/docs/pre.sh
+++ b/docs/pre.sh
@@ -1,3 +1,4 @@
#!/bin/bash
cp -a ../rpc/openapi/ .vuepress/public/rpc/
+cp -r ../spec .
diff --git a/docs/presubmit.sh b/docs/presubmit.sh
new file mode 100755
index 0000000000..19e931a4f2
--- /dev/null
+++ b/docs/presubmit.sh
@@ -0,0 +1,39 @@
+#!/bin/bash
+#
+# This script verifies that each document in the docs and architecture
+# directory has a corresponding table-of-contents entry in its README file.
+#
+# This can be run manually from the command line.
+# It is also run in CI via the docs-toc.yml workflow.
+#
+set -euo pipefail
+
+readonly base="$(dirname "$0")"
+cd "$base"
+
+readonly workdir="$(mktemp -d)"
+trap "rm -fr -- '$workdir'" EXIT
+
+checktoc() {
+ local dir="$1"
+ local tag="$2"'-*-*'
+ local out="$workdir/${dir}.out.txt"
+ (
+ cd "$dir" >/dev/null
+ find . -maxdepth 1 -type f -name "$tag" -not -exec grep -q "({})" README.md ';' -print
+ ) > "$out"
+ if [[ -s "$out" ]] ; then
+ echo "-- The following files in $dir lack a ToC entry:
+"
+ cat "$out"
+ return 1
+ fi
+}
+
+err=0
+
+# Verify that each RFC and ADR has a ToC entry in its README file.
+checktoc architecture adr || ((err++))
+checktoc rfc rfc || ((err++))
+
+exit $err
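+
+The heart of `checktoc` above is the `find ... -not -exec grep -q "({})" ...
+-print` idiom: `find` substitutes each candidate filename for `{}`, so a file
+is printed only when `grep` cannot find a `(./<file>)`-style link to it in
+README.md. A self-contained sketch of the idiom on a throwaway directory (all
+file names here are made up):
+
+```shell
+# Demonstrate the ToC check on a throwaway directory.
+dir="$(mktemp -d)"
+cd "$dir"
+touch adr-001-foo.md adr-002-bar.md
+# README links only the first document.
+echo '- [ADR 001](./adr-001-foo.md)' > README.md
+# List every adr-* file that README.md does not reference.
+missing="$(find . -maxdepth 1 -type f -name 'adr-*' \
+  -not -exec grep -q "({})" README.md ';' -print)"
+echo "$missing"
+```
+
+Only the unreferenced document is printed; a non-empty result is what makes
+`checktoc` return 1 in the script above.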
diff --git a/docs/rfc/images/abci++.png b/docs/rfc/images/abci++.png
new file mode 100644
index 0000000000000000000000000000000000000000..d5146f99573950d5b5a9f7fce594a2cb3c719c5e
GIT binary patch
literal 2792638