
p2p: improve PEX reactor #6305

Merged: 27 commits, Apr 26, 2021
- `3c39642` modify PEX reactor and add scaffold for tests (cmwaters, Apr 1, 2021)
- `7e53f6e` create pex test suite (cmwaters, Apr 1, 2021)
- `d747b78` add mock nodes to the test suite (cmwaters, Apr 1, 2021)
- `a89cb5c` reactor tests (tychoish, Apr 1, 2021)
- `64f19d1` fix lint (tychoish, Apr 1, 2021)
- `230b4ef` checkpoint (cmwaters, Apr 8, 2021)
- `69c9291` add new pex messages (cmwaters, Apr 16, 2021)
- `d45ead1` add tests (cmwaters, Apr 16, 2021)
- `4a603e6` lint (cmwaters, Apr 16, 2021)
- `4079b06` Merge branch 'master' into callum/p2p-seed (cmwaters, Apr 16, 2021)
- `9f8b618` fmt (cmwaters, Apr 16, 2021)
- `437e3d8` Apply suggestions from code review (cmwaters, Apr 20, 2021)
- `86bc93c` make two separate go routines (cmwaters, Apr 22, 2021)
- `eb85aa5` accomodate for the legacy pex messages (cmwaters, Apr 22, 2021)
- `c07a540` add changelog (cmwaters, Apr 22, 2021)
- `675452d` Merge branch 'master' into callum/p2p-seed (cmwaters, Apr 22, 2021)
- `96898ec` create Node Options and rename Capacity function (cmwaters, Apr 22, 2021)
- `a3f8911` Merge branch 'callum/p2p-seed' of github.com:tendermint/tendermint in… (cmwaters, Apr 22, 2021)
- `10c2cd6` fix changelog (cmwaters, Apr 22, 2021)
- `c513162` implement suggested changes (cmwaters, Apr 23, 2021)
- `9f44019` fix tests (cmwaters, Apr 23, 2021)
- `b2c2ff5` Merge branch 'master' into callum/p2p-seed (cmwaters, Apr 23, 2021)
- `44027c7` Merge branch 'master' into callum/p2p-seed (cmwaters, Apr 23, 2021)
- `7a2e528` make minor changes to the test (cmwaters, Apr 26, 2021)
- `d1b0098` update comment (cmwaters, Apr 26, 2021)
- `68c8e04` update another comment (cmwaters, Apr 26, 2021)
- `e5467f6` modify some more comments (cmwaters, Apr 26, 2021)
3 changes: 3 additions & 0 deletions CHANGELOG_PENDING.md
@@ -57,6 +57,9 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi

- [config] Add `--mode` flag and config variable. See [ADR-52](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md) @dongsam
- [rpc] \#6329 Don't cap page size in unsafe mode (@gotjoshua, @cmwaters)
- [pex] \#6305 v2 pex reactor with backwards compatibility. Introduces two new pex messages to
accommodate the new p2p stack. Removes the notion of seeds and crawling. All peer
exchange reactors behave the same. (@cmwaters)
- [crypto] \#6376 Enable sr25519 as a validator key

### IMPROVEMENTS
5 changes: 4 additions & 1 deletion blockchain/v0/reactor_test.go
Expand Up @@ -300,7 +300,10 @@ func TestReactor_BadBlockStopsPeer(t *testing.T) {
// XXX: This causes a potential race condition.
// See: https://github.com/tendermint/tendermint/issues/6005
otherGenDoc, otherPrivVals := randGenesisDoc(config, 1, false, 30)
newNode := rts.network.MakeNode(t)
newNode := rts.network.MakeNode(t, p2ptest.NodeOptions{
MaxPeers: uint16(len(rts.nodes) + 1),
MaxConnected: uint16(len(rts.nodes) + 1),
})
rts.addNode(t, newNode.NodeID, otherGenDoc, otherPrivVals[0], maxBlockHeight)

// add a fake peer just so we do not wait for the consensus ticker to timeout
2 changes: 1 addition & 1 deletion node/node.go
Expand Up @@ -646,7 +646,7 @@ func createPeerManager(
}

for _, peer := range peers {
if err := peerManager.Add(peer); err != nil {
if _, err := peerManager.Add(peer); err != nil {
return nil, fmt.Errorf("failed to add peer %q: %w", peer, err)
}
}
20 changes: 16 additions & 4 deletions p2p/p2ptest/network.go
Expand Up @@ -30,6 +30,12 @@ type Network struct {
type NetworkOptions struct {
NumNodes int
BufferSize int
NodeOpts NodeOptions
}

type NodeOptions struct {
MaxPeers uint16
MaxConnected uint16
}

func (opts *NetworkOptions) setDefaults() {
@@ -50,7 +56,7 @@ func MakeNetwork(t *testing.T, opts NetworkOptions) *Network {
}

for i := 0; i < opts.NumNodes; i++ {
node := network.MakeNode(t)
node := network.MakeNode(t, opts.NodeOpts)
network.Nodes[node.NodeID] = node
}

@@ -81,7 +87,9 @@ func (n *Network) Start(t *testing.T) {
for _, targetAddress := range dialQueue[i+1:] { // nodes <i already connected
targetNode := n.Nodes[targetAddress.NodeID]
targetSub := subs[targetAddress.NodeID]
require.NoError(t, sourceNode.PeerManager.Add(targetAddress))
added, err := sourceNode.PeerManager.Add(targetAddress)
require.NoError(t, err)
require.True(t, added)

select {
case peerUpdate := <-sourceSub.Updates():
@@ -107,7 +115,9 @@

// Add the address to the target as well, so it's able to dial the
// source back if that's even necessary.
require.NoError(t, targetNode.PeerManager.Add(sourceAddress))
added, err = targetNode.PeerManager.Add(sourceAddress)
require.NoError(t, err)
require.True(t, added)
}
}
}
@@ -214,7 +224,7 @@ type Node struct {
// MakeNode creates a new Node configured for the network with a
// running peer manager, but does not add it to the existing
// network. Callers are responsible for updating peering relationships.
func (n *Network) MakeNode(t *testing.T) *Node {
func (n *Network) MakeNode(t *testing.T, opts NodeOptions) *Node {
> Reviewer (Contributor): are there cases where the NodeOptions are different for different members of the network, or is it just safe to set it (and the options) on the network itself?
>
> Author (cmwaters): Hmmm probs not. I was thinking that you may want to test a network with differently configured nodes but the default was that they were the same. Are you suggesting that the Network saves a copy of the options and just uses that whenever MakeNode is called?
>
> Reviewer (Contributor): I think this change is fine, but I think the more options and whatever you have to pass around the harder it is to get the right thing to happen in your tests, so I'm just sort of generically/reflexively coming down on the side of "make interfaces simpler/smaller".
privKey := ed25519.GenPrivKey()
nodeID := p2p.NodeIDFromPubKey(privKey.PubKey())
nodeInfo := p2p.NodeInfo{
@@ -230,6 +240,8 @@ func (n *Network) MakeNode(t *testing.T) *Node {
MinRetryTime: 10 * time.Millisecond,
MaxRetryTime: 100 * time.Millisecond,
RetryTimeJitter: time.Millisecond,
MaxPeers: opts.MaxPeers,
MaxConnected: opts.MaxConnected,
})
require.NoError(t, err)

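The options plumbing discussed in the review thread (network-level `NodeOpts` applied to every node, with per-node overrides still possible via `MakeNode`) can be illustrated with a self-contained sketch. This is a hypothetical mini-version, not the real `p2ptest` package: the names mirror the diff, but every type below is a standalone stand-in.

```go
package main

import "fmt"

// NodeOptions mirrors the struct added in this PR: per-node peer limits.
type NodeOptions struct {
	MaxPeers     uint16
	MaxConnected uint16
}

// NetworkOptions carries a single NodeOptions that MakeNetwork applies
// to every node it creates, as in the diff above.
type NetworkOptions struct {
	NumNodes int
	NodeOpts NodeOptions
}

type Node struct {
	ID           int
	MaxPeers     uint16
	MaxConnected uint16
}

type Network struct {
	Nodes []*Node
}

// MakeNode takes options explicitly, so a caller can still create one
// node whose limits differ from the rest of the network.
func (n *Network) MakeNode(opts NodeOptions) *Node {
	node := &Node{
		ID:           len(n.Nodes),
		MaxPeers:     opts.MaxPeers,
		MaxConnected: opts.MaxConnected,
	}
	n.Nodes = append(n.Nodes, node)
	return node
}

func MakeNetwork(opts NetworkOptions) *Network {
	network := &Network{}
	for i := 0; i < opts.NumNodes; i++ {
		network.MakeNode(opts.NodeOpts)
	}
	return network
}

func main() {
	network := MakeNetwork(NetworkOptions{
		NumNodes: 3,
		NodeOpts: NodeOptions{MaxPeers: 8, MaxConnected: 4},
	})
	// A later node can be sized to the existing network, as in the
	// blockchain reactor test in this PR.
	extra := network.MakeNode(NodeOptions{
		MaxPeers:     uint16(len(network.Nodes) + 1),
		MaxConnected: uint16(len(network.Nodes) + 1),
	})
	fmt.Println(len(network.Nodes), extra.MaxPeers)
}
```

This keeps the common case (uniform nodes) to one struct literal while preserving the flexibility the author wanted for differently configured nodes.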
34 changes: 25 additions & 9 deletions p2p/peermanager.go
@@ -384,13 +384,14 @@ func (m *PeerManager) prunePeers() error {
}

// Add adds a peer to the manager, given as an address. If the peer already
// exists, the address is added to it if not already present.
func (m *PeerManager) Add(address NodeAddress) error {
// exists, the address is added to it if it isn't already present. This will push
// low scoring peers out of the address book if it exceeds the maximum size.
func (m *PeerManager) Add(address NodeAddress) (bool, error) {
> Reviewer (Contributor): do we ever handle the false, nil case differently than the true, nil case?
>
> Author (cmwaters): Yup, we count how many nodes are new vs how many are already in the peer store and use it as a heuristic for calculating how often we should ping nodes for more addresses. The thinking is that if I'm seeing a lot of new addresses I should be pinging more often to fill up the peer store, and once I stop seeing new addresses I slow down the cadence with which I send requests.
if err := address.Validate(); err != nil {
return err
return false, err
}
if address.NodeID == m.selfID {
return fmt.Errorf("can't add self (%v) to peer store", m.selfID)
return false, fmt.Errorf("can't add self (%v) to peer store", m.selfID)
}

m.mtx.Lock()
@@ -400,17 +401,32 @@
if !ok {
peer = m.newPeerInfo(address.NodeID)
}
if _, ok := peer.AddressInfo[address]; !ok {
peer.AddressInfo[address] = &peerAddressInfo{Address: address}
_, ok = peer.AddressInfo[address]
// if we already have the peer address, there's no need to continue
if ok {
return false, nil
}

// else add the new address
peer.AddressInfo[address] = &peerAddressInfo{Address: address}
if err := m.store.Set(peer); err != nil {
return err
return false, err
}
if err := m.prunePeers(); err != nil {
return err
return true, err
}
m.dialWaker.Wake()
return nil
return true, nil
}

// PeerRatio returns the ratio of peer addresses stored to the maximum size.
func (m *PeerManager) PeerRatio() float64 {
if m.options.MaxPeers == 0 {
return 0
}
m.mtx.Lock()
defer m.mtx.Unlock()
return float64(m.store.Size()) / float64(m.options.MaxPeers)
}

// DialNext finds an appropriate peer address to dial, and marks it as dialing.
4 changes: 3 additions & 1 deletion p2p/peermanager_scoring_test.go
@@ -23,7 +23,9 @@ func TestPeerScoring(t *testing.T) {

// create a fake node
id := NodeID(strings.Repeat("a1", 20))
require.NoError(t, peerManager.Add(NodeAddress{NodeID: id, Protocol: "memory"}))
added, err := peerManager.Add(NodeAddress{NodeID: id, Protocol: "memory"})
require.NoError(t, err)
require.True(t, added)

t.Run("Synchronous", func(t *testing.T) {
// update the manager and make sure it's correct