Expire and republish records received via PutValue #388

Open · wants to merge 1 commit into master
Conversation

@aarshkshah1992 (Author) commented Sep 2, 2019:

For issue #323:

  1. Expire non-provider records older than MaxAge.
  2. The original publisher should republish PutValue records.
  3. Peers that receive a PutValue record should republish records older than an hour (see the sketch below).
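
A rough sketch of the republish side of this lifecycle, using hypothetical names (`republishLoop`, `putTimeout`) rather than the PR's actual identifiers; the key property is that the republish interval must stay comfortably below the record's maximum age so a live record never expires between republishes:

```go
package sketch

import (
	"context"
	"time"
)

// putter abstracts the DHT for this sketch; IpfsDHT.PutValue has a
// similar shape (it also takes variadic options).
type putter interface {
	PutValue(ctx context.Context, key string, value []byte) error
}

// republishLoop re-puts a record we originally published until done closes.
func republishLoop(done <-chan struct{}, dht putter, key string, value []byte,
	republishInterval, putTimeout time.Duration) {
	for {
		select {
		case <-time.After(republishInterval):
			// The original request context may be long gone, so derive a
			// fresh, bounded one for each republish.
			ctx, cancel := context.WithTimeout(context.Background(), putTimeout)
			_ = dht.PutValue(ctx, key, value) // best effort; real code logs the error
			cancel()
		case <-done:
			return
		}
	}
}
```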

routing.go Outdated
```diff
@@ -27,6 +27,8 @@ import (
 // results will wait for the channel to drain.
 var asyncQueryBuffer = 10

+var putValueRepublishInterval = 6 * time.Hour
```
@aarshkshah1992 (Author):
Will change after the answer to the question here

records.go Outdated
```diff
-// it must be rebroadcasted more frequently than once every 'MaxRecordAge'
-const MaxRecordAge = time.Hour * 36
+// it must be rebroadcasted more frequently than once every 'maxNonProviderRecordAge'
+var maxNonProviderRecordAge = time.Hour * 12
```
@aarshkshah1992 (Author):
Change based on answer to this question

@bigs (Contributor) left a comment:

one minor comment

records.go Outdated

```go
res, err := dht.datastore.Query(query.Query{Filters: []query.Filter{&expireRecordFilter{}}})
if err != nil {
	logger.Errorf("expire records proc: failed to run query against datastore, error is %+v", err)
```
@bigs (Contributor):
should this be %s instead or does the error carry necessary metadata?
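
For context on the `%s` vs `%+v` question: if the error originates from github.com/pkg/errors (a plausible source of the "metadata" here, though this PR may not use that package), `%+v` prints a stack trace recorded at each wrap site, while `%s` prints only the message; for a plain error the two verbs print the same thing. A minimal illustration:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	err := errors.Wrap(errors.New("datastore offline"), "query failed")
	fmt.Printf("%s\n", err)  // "query failed: datastore offline"
	fmt.Printf("%+v\n", err) // same message, followed by the recorded stack traces
}
```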

@bigs (Contributor) left a comment:

let's refactor the putvalue system to use a single coroutine

```go
// 2. Original publisher should republish putvalue records
// 3. Peers who receive a record will republish hourly
case <-time.After(putValueRepublishInterval):
	// TODO: We can not re-use the original context here as it may have expired
	// But, is it fair to use this one ?
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
```
@aarshkshah1992 (Author):
Is 5 minutes too long a timeout for PutValue?
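
The TODO above is about context reuse as much as the timeout: once the original PutValue context is done, any later republish issued on it fails immediately, which is why deriving a fresh bounded context from context.Background() is the safe choice. A tiny, runnable demonstration of the failure mode:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// Simulate the original request context having expired.
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	<-ctx.Done()
	// Any RPC (e.g. a republishing PutValue) issued with ctx aborts at once.
	fmt.Println(ctx.Err()) // context deadline exceeded
}
```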

```diff
@@ -396,3 +396,7 @@ func (dht *IpfsDHT) handleAddProvider(ctx context.Context, p peer.ID, pmes *pb.M
 func convertToDsKey(s []byte) ds.Key {
 	return ds.NewKey(base32.RawStdEncoding.EncodeToString(s))
 }
+
+func convertToOriginalKey(k string) ([]byte, error) {
```
@aarshkshah1992 (Author):
This is not used anymore. Need to remove this.
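
Even though it is slated for removal, the body was presumably just the decode counterpart of convertToDsKey. A minimal sketch, assuming an unpadded standard base32 alphabet (the stdlib equivalent of the RawStdEncoding used above) and that the leading slash ds.NewKey adds must be stripped first:

```go
package sketch

import (
	"encoding/base32"
	"strings"
)

var rawStdEncoding = base32.StdEncoding.WithPadding(base32.NoPadding)

func convertToOriginalKey(k string) ([]byte, error) {
	// ds.NewKey prefixes keys with "/", so drop it before decoding.
	return rawStdEncoding.DecodeString(strings.TrimPrefix(k, "/"))
}
```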

```go
var nonProvRecordRePublishAge = 1 * time.Hour
var enableRepublishJitter = true

type NonProvRecordsManager struct {
```
@aarshkshah1992 (Author):
We cannot put this in a separate package: IpfsDHT depends on this, and this depends on IpfsDHT.
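
A plausible shape for the manager, inferred from the fields used in the snippets in this thread (m.ctx, m.proc, m.rePublishInterval); the PR's actual field set may differ:

```go
// Inside the dht package, alongside IpfsDHT.
type NonProvRecordsManager struct {
	dht *IpfsDHT
	ctx context.Context

	proc goprocess.Process // lifecycle for the expiry/republish workers

	expireInterval    time.Duration
	rePublishInterval time.Duration
}
```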

```go
}

// call put value
putCtx, cancel := context.WithTimeout(m.ctx, 2*time.Minute)
```
@aarshkshah1992 (Author):
@bigs Is 2 minutes a reasonable timeout here?

```go
// republish records older than prescribed age
m.rePublishInterval = nonProvRecordRePublishInterval
m.proc.Go(m.rePublish)
```

@aarshkshah1992 (Author):
@bigs We can have separate dedicated goroutines for expiry and republish, as we don't need to serialize access to a shared resource or synchronize between them.
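
What that split might look like with goprocess (which this codebase uses for lifecycle management); the constructor name and the expire worker are assumptions, while m.rePublish is named in the diff above:

```go
// Hypothetical constructor wiring two independent workers.
func NewNonProvRecordsManager(ctx context.Context, dht *IpfsDHT) *NonProvRecordsManager {
	m := &NonProvRecordsManager{dht: dht, ctx: ctx}
	m.proc = goprocessctx.WithContext(ctx) // github.com/jbenet/goprocess/context
	// The workers share no mutable state, so nothing forces them onto a
	// single goroutine.
	m.proc.Go(m.expire)
	m.proc.Go(m.rePublish)
	return m
}
```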

```go
// unmarshal record
rec := new(recpb.Record)
if err := proto.Unmarshal(e.Value, rec); err != nil {
	logger.Debugf("expire records filter: failed to unmarshal DHT record from datastore, error is %s", err)
```
@aarshkshah1992 (Author):
change tag in the logging message

```go
// parse received time
t, err := u.ParseRFC3339(rec.TimeReceived)
if err != nil {
	logger.Debugf("expire records filter: failed to parse time in DHT record, error is %s", err)
```
@aarshkshah1992 (Author):
change tag in the logging message
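
Putting the two snippets together, the filter handed to dht.datastore.Query above presumably looks something like this: a sketch assembled from the fragments in this thread, not the PR's verbatim code:

```go
// expireRecordFilter matches datastore entries whose record is older
// than maxNonProviderRecordAge and should therefore be expired.
type expireRecordFilter struct{}

func (f *expireRecordFilter) Filter(e query.Entry) bool {
	// unmarshal record
	rec := new(recpb.Record)
	if err := proto.Unmarshal(e.Value, rec); err != nil {
		logger.Debugf("expire records filter: failed to unmarshal DHT record from datastore, error is %s", err)
		return false
	}
	// parse received time
	t, err := u.ParseRFC3339(rec.TimeReceived)
	if err != nil {
		logger.Debugf("expire records filter: failed to parse time in DHT record, error is %s", err)
		return false
	}
	return time.Since(t) > maxNonProviderRecordAge
}
```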

@aarshkshah1992 changed the title from "Expire PutValue records & original publisher should republish" to "Expire and republish records received via PutValue" on Sep 8, 2019
@aarshkshah1992 (Author) commented Sep 8, 2019:

@bigs @Stebalien This PR is now ready for review. I have addressed all three TODOs that we discussed. Please take a look. Thanks a lot!

@aarshkshah1992 (Author) commented Sep 8, 2019:

@bigs

> let's refactor the putvalue system to use a single coroutine

I have refactored the system to expire & republish non-provider records into its own structure. However, I am not sure we need to do it in a single goroutine, as commented here. Let me know what you think.

@aarshkshah1992 (Author) commented:

Hey @bigs,

Reminder to take a look. Thanks a lot for your time.

@bigs (Contributor) commented Oct 14, 2019:

Going to check this out first thing tomorrow. Thanks, and sorry for the delay; I lost this one among the other PRs.

@aarshkshah1992 (Author) commented:

Hey @bigs,

This is code complete. Please take a look when you can. This is also a good candidate to exercise the libp2p-testlab.

@aarshkshah1992 (Author) commented:

@raulk This can be brought to a conclusion if it gets some love in the form of a review :) Do you have any concerns about this feature?

bkase pushed a commit to MinaProtocol/mina that referenced this pull request on Feb 10, 2020
This is the revival of the tmp/cmr/net2 branch rebased onto develop. 

Some important user-facing changes:

- No separate discovery/communication/etc ports. One port for all public daemon communications.
- Automatic port forwarding with UPnP. If your local network supports UPnP, there should be no configuration required.
- Local peer discovery. If your local network supports mDNS broadcast, coda daemons will automatically discover each other. This includes several daemons on the same machine: no more building peer lists!
- New libp2p keypairs. These are managed the same as our key pairs with secret_file. Without configuration, key pairs are ephemeral and will disappear when the daemon restarts. (TODO: should we instead persist the keypair? does it matter for non-infrastructure?)

Some important internal changes:

- All daemon-daemon connections are now authenticated and confidential.
- Connections are no longer transient and per-request. Individual requests get multiplexed as their own stream over the one connection between the peers. This is analogous to HTTP/2. Outgoing connections will appear to originate from the libp2p listening port, vs some transient port. 

Outstanding details:

- Trust system needs to get augmented to track Peer.t instead of just an IP. Until then we can't implement ban_notify (#4093, #4096).
- Libp2p has little per-connection structured reporting; some things we currently penalize trust for are not detected (e.g. opening a libp2p connection without also opening a coda RPC stream) (#4098).
- New pubsub allows banning senders by peer ID. We currently don't do this, but we should ban peer IDs that originated bad info and not just the IP of whoever relayed it to us (#4096).
- ~~Current pubsub validation flow goes a bit against the libp2p grain, and it's not clear to me that the current behavior will survive [this libp2p PR](libp2p/go-libp2p-kad-dht#388). There's an inline comment near the should_forward_message impl (#4097).~~ done
- Connection limit enforcement (#4095)

Other changes:

- Rips out the last vestiges of old membership, which aren't in use.
- The connection info in envelopes is much more accurate now. We shouldn't start trusting it just yet due to some future vagaries around relaying.
- bump nixpkgs version

Future improvements:

- IPv6. There's a hardcoded IPv4 assumption in the helper around IP filtering. 
- Investigate libp2p autorelay. This should help nodes in restrictive networks achieve better connectivity, but has a host of problems.
- Intelligent request routing. I believe we can use the "provider" feature to, at the very least, only send e.g. sync/bootstrap requests to nodes that believe themselves to be in sync. There are other options.
mobileink pushed a commit to minatools/libp2p_helper that referenced this pull request on Oct 9, 2020 (same commit message as above)