
fix: pullsync make offer timeout #3520

Merged: 3 commits, Nov 10, 2022
2 changes: 1 addition & 1 deletion pkg/puller/puller.go
@@ -29,7 +29,7 @@ const loggerName = "puller"

var errCursorsLength = errors.New("cursors length mismatch")

-const DefaultSyncErrorSleepDur = time.Second * 5
+const DefaultSyncErrorSleepDur = time.Second * 30
Contributor:

I'm just curious as to why the sleep duration needs to be increased.

Member Author:

It's an arbitrary value; after some thought, 5 seconds seemed a bit aggressive to me.


type Options struct {
Bins uint8
15 changes: 15 additions & 0 deletions pkg/pullsync/pullstorage/pullstorage_test.go
@@ -111,6 +111,21 @@ func TestIntervalChunks(t *testing.T) {
}
}

func TestIntervalChunksTimeout(t *testing.T) {
t.Parallel()

ctx, cancel := context.WithTimeout(context.Background(), 0)
defer cancel()
<-ctx.Done()

ps, _ := newPullStorage(t)

_, _, err := ps.IntervalChunks(ctx, 0, 0, 0, limit)
if !errors.Is(err, context.DeadlineExceeded) {
t.Fatal(err)
}
}

// Get some descriptor from the chunk channel, then block for a while
// then add more chunks to the subscribe pull iterator and make sure the loop
// exits correctly.
5 changes: 5 additions & 0 deletions pkg/pullsync/pullsync.go
@@ -50,6 +50,7 @@ var (

const (
storagePutTimeout = 5 * time.Second
makeOfferTimeout = 5 * time.Minute
Collaborator:

My nodes are (well, were before the stamps expired) routinely taking longer than 5 minutes to construct the offer. makeOffer's context is automatically cancelled at the end of the commit round by the agent, so why have a local timeout specified here?

Member Author:

This is puller/pullsync stuff, not the reserve sampler. :)

Collaborator:

Duh. makeOffer, not makeSample! That's what I get for commenting before I'm fully awake. I'll blame it on Nicole, the hurricane that passed by us last night!

Member Author:

Jeez, a bad year for Florida! So many hurricanes.

Collaborator:

At least Nicole was barely a category 1, just above a tropical storm. Amazingly, we didn't even lose power; a few blinks overnight, but no outright loss.

)

// how many maximum chunks in a batch
@@ -355,6 +356,10 @@ func (s *Syncer) handler(streamCtx context.Context, p p2p.Peer, stream p2p.Strea

// makeOffer tries to assemble an offer for a given requested interval.
func (s *Syncer) makeOffer(ctx context.Context, rn pb.GetRange) (o *pb.Offer, addrs []swarm.Address, err error) {

ctx, cancel := context.WithTimeout(ctx, makeOfferTimeout)
defer cancel()

chs, top, err := s.storage.IntervalChunks(ctx, uint8(rn.Bin), rn.From, rn.To, maxPage)
if err != nil {
return o, nil, err