Message templating, allowing multiple identities in one instance #188

Open · wants to merge 7 commits into base: main
Conversation

@Kubuxu Kubuxu (Collaborator) commented May 3, 2024

Removes ParticipantID from gpbft; signing now happens outside of gpbft, so a participant's own messages are delivered "from the outside".
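For reviewers new to the idea, here is a minimal, self-contained sketch of the template-then-sign flow this description refers to. The types and the Build method below are simplified stand-ins for illustration, not the actual gpbft API in this PR.

```go
package main

import "fmt"

// Simplified stand-ins for illustration; the real gpbft types carry more
// fields (power table, network name, beacon for the ticket, justification).
type Payload struct{ Step string }

type SigningTemplate struct {
	ParticipantID uint64
	Payload       Payload
}

type GMessage struct {
	Sender    uint64
	Vote      Payload
	Signature []byte
}

// Build attaches an externally produced signature to the template. In the PR,
// this is roughly where the sender identity gets attached, instead of inside
// the gpbft instance itself.
func (st SigningTemplate) Build(sig []byte) *GMessage {
	return &GMessage{Sender: st.ParticipantID, Vote: st.Payload, Signature: sig}
}

func main() {
	// gpbft emits a template without signing it...
	st := SigningTemplate{ParticipantID: 7, Payload: Payload{Step: "QUALITY"}}

	// ...the integrator signs it, possibly on the far side of an RPC boundary...
	sig := []byte("signature-from-external-signer")

	// ...and only then builds and broadcasts the message. Because of this,
	// the participant's own messages arrive "from the outside".
	msg := st.Build(sig)
	fmt.Printf("broadcast from P%d: %+v\n", msg.Sender, msg)
}
```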

@Kubuxu Kubuxu marked this pull request as ready for review May 7, 2024 18:55
@Kubuxu Kubuxu requested a review from masih May 7, 2024 18:55
@masih masih (Member) commented May 8, 2024

@Kubuxu can you resolve conflicts before I review please?

Kubuxu added 6 commits May 8, 2024 17:11
Participant.ID is only for testing/debug, need to rip it out
Tests are incoming

@Kubuxu Kubuxu (Collaborator, Author) commented May 8, 2024

@masih rebased

if n.participants[p.ID()] != nil {
panic("duplicate participant ID")
}
n.participantIDs = append(n.participantIDs, p.ID())
n.participants[p.ID()] = p
}

////// Network interface
Member

Subjective: could we avoid these comments going forward, please? They seem redundant considering modern IDE capabilities.

Suggested change
////// Network interface

}
}

type NetworkFor struct {
Member

Any reason for exporting this struct?

Collaborator Author

Is there any reason not to? It is a wrapper around Network, which is also exported.

Member

My vote and the common convention is to not export unless needed. This is not a blocker. Please don't let me slow you down. We can refactor it later.

sim/sim.go (resolved)
@@ -23,13 +23,20 @@ type simHost struct {
id gpbft.ActorID
}

type SimNetwork interface {
Member

Can we unify SimNetwork, simHost and NetworkFor into one struct (simHost)?

I am curious why we want all three as separate as they are.

gpbft/messagetemplate.go (outdated, resolved)
Comment on lines 89 to 96
gmsg := &GMessage{
Sender: st.ParticipantID,
Vote: st.Payload,
Signature: payloadSignature,
Ticket: vrf,
Justification: st.Justification,
}
return gmsg
Member

Suggested change
gmsg := &GMessage{
Sender: st.ParticipantID,
Vote: st.Payload,
Signature: payloadSignature,
Ticket: vrf,
Justification: st.Justification,
}
return gmsg
return &GMessage{
Sender: st.ParticipantID,
Vote: st.Payload,
Signature: payloadSignature,
Ticket: vrf,
Justification: st.Justification,
}

Power: big.NewInt(1),
},
}...)
assert.NoError(t, err)
Member

This is a fatal error; use require.

Collaborator Author

Yeah, I had require and assert swapped around.
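For context, the distinction matters because testify's assert records a failure and keeps executing, while require aborts the test immediately. A small stand-alone illustration (the error here is made up):

```go
package example

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestSetupFailure(t *testing.T) {
	err := errors.New("power table setup failed") // hypothetical setup error

	// assert marks the test as failed but execution continues, so later
	// code may run against half-initialised state.
	assert.NoError(t, err)

	// require calls t.FailNow and stops the test here, which is the right
	// behaviour for fatal setup errors like the one in this thread.
	require.NoError(t, err)

	t.Log("not reached when err != nil")
}
```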

//if err != nil {
//i.log("error while signing message: %v", err)
//return
//}
Member

Clean up the commented-out code?

gpbft/api.go (outdated, resolved)
return signingTemplate, nil
}

// Sign creates signatures for the SigningTemplate, it could live across RPC boundry
Member

Suggested change
// Sign creates signatures for the SigningTemplate, it could live across RPC boundry
// Sign creates signatures for the SigningTemplate, it could live across RPC boundary.

@anorth anorth (Member) left a comment

The big items:

  • [BLOCKING] Logs must have participant IDs
  • [BLOCKING] Now-redundant local message inbox should be deleted
  • There's lots of plumbing of values from outside through gpbft and then back to outside as part of the message template. The host already has these things, can we avoid plumbing them and just look up by instance ID?

The change to async delivery to self is a notable enough change that I would suggest doing it in an independent PR, before removing the participant ID. There's a bunch of other pre-factor work in the sim/network which could probably be pulled out ahead of the actual message templates, too.

sim/network.go Outdated
func (n *Network) NetworkName() gpbft.NetworkName {
return n.networkName
}

func (n *Network) Broadcast(msg *gpbft.GMessage) {
n.log(TraceSent, "P%d ↗ %v", msg.Sender, msg)
for _, dest := range n.participantIDs {
// FIXME: something in gpbft assumes that we will immediately recieve message from ourselves
Member

Yes, it needs that, so it needs to be enabled somehow.

// Sends a message to all other participants.
// The message's sender must be one that the network interface can sign on behalf of.
Broadcast(msg *GMessage)
// Requests that the message is signed and broadcasted, it should also be delivered locally
Member

I see you have removed the synchronous self-delivery in broadcast, because the participant can't attach its own ID to the message being sent.

This change from sync to async is probably fine; I don't think anything relies on the synchronicity. However, there's now a bunch of code about a local message queue that is redundant and should be removed.
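To illustrate the point about the redundant queue, a simplified sketch (not the sim package's real types) of a broadcast that delivers to the sender asynchronously through the same path as every other participant, so no separate local inbox is needed:

```go
package simsketch

// message and network are simplified stand-ins for illustration only.
type message struct {
	Sender  uint64
	Payload string
}

type network struct {
	inboxes map[uint64]chan message // one inbox per participant, sender included
}

// Broadcast enqueues the message for every participant, including the sender.
// Self-delivery is therefore asynchronous and needs no dedicated local queue.
func (n *network) Broadcast(msg message) {
	for _, inbox := range n.inboxes {
		inbox <- msg
	}
}
```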

Ticket: ticket,
messageTemplate := MessageTemplate{
powerTable: &i.powerTable,
NetworkName: i.participant.host.NetworkName(),
Member

This name is fetched from the host just to be passed straight back out. Remove NetworkName from the host API and just attach it when building messages from templates.
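A hypothetical shape of what this suggests, with simplified stand-in types: the host, which already knows the network name, supplies it when building the message rather than gpbft reading it back from the host and storing it on the template. The same approach could apply to the power table and beacon discussed further down.

```go
package templatesketch

type NetworkName string

// MessageTemplate no longer needs to carry the network name around; it only
// holds what the gpbft instance itself decided.
type MessageTemplate struct {
	Payload string
}

type SignedMessage struct {
	Network   NetworkName
	Payload   string
	Signature []byte
}

// Build is called by the host, which attaches the network name (and any other
// host-known data) at this point.
func (mt MessageTemplate) Build(nn NetworkName, sig []byte) SignedMessage {
	return SignedMessage{Network: nn, Payload: mt.Payload, Signature: sig}
}
```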

})
}
}

func (n *Network) NetworkName() gpbft.NetworkName {
return n.networkName
}

func (n *Network) Broadcast(msg *gpbft.GMessage) {
Member

Please move this now next to Broadcast as part of the adversary interface only.

deliverAt: n.clock.Add(latencySample),
})
if dest == msg.Sender {
continue
Member

As far as I can tell this is functionally equivalent. If that's right, I think the previous approach without unstructured control flow is marginally better. Non-blocking.


// Expect message is signed with expected payload.
pt.host.On("Sign", pt.pubKey, pt.matchMessageSigningPayload()).Return(wantSignature, nil)

// Expect a broadcast occurs with quality phase message, and the expected chain, signature.
Member

Suggested change
// Expect a broadcast occurs with quality phase message, and the expected chain, signature.
// Expect a broadcast occurs with quality phase message, and the expected chain.

)

type MessageTemplate struct {
powerTable powerTableAccess
Member

This feels a bit clumsy to me, though I see how it was reached from the current code. The power table came from the host in the first place, and we're just passing it back here. This field could instead be just an instance ID (if the host doesn't already know this), and then the host could pass the right power table to Build.

This is a symptom of another awkwardness, that the signing key is looked up by public key rather than by participant ID. Previously I thought this was not worth changing, but it may be contributing here.

Collaborator Author

The point is that the MessageTemplate is exposed to whoever integrates it (Lotus), which can create the signing payload that can cross RPC boundaries to be signed and given back.
I will document it more; does that change your CR?

Member

I don't have a good enough view into how this will be used in the integration, so I won't hold you up on it. I do feel like we can do better, but don't know enough to say exactly how.

type MessageTemplate struct {
powerTable powerTableAccess

NetworkName NetworkName
Member

Ditto: the host should know this; no need to plumb it through the GPBFT instance.

NetworkName NetworkName
Payload Payload

BeaconForTicket []byte
Member

This came from the host too. Maybe we can avoid plumbing this through too. Should be able to look it up by instance. (The sim host probably doesn't have this mapping yet).

@@ -842,16 +835,11 @@ func (i *instance) buildJustification(quorum QuorumResult, round uint64, phase P
func (i *instance) log(format string, args ...interface{}) {
if i.tracer != nil {
msg := fmt.Sprintf(format, args...)
i.tracer.Log("P%d{%d}: %s (round %d, step %s, proposal %s, value %s)", i.participant.id, i.instanceID, msg,
i.tracer.Log("{%d}: %s (round %d, step %s, proposal %s, value %s)", i.instanceID, msg,
Member

Blocking: participant IDs in the logs are critical to development and debugging of the protocol itself. Please prefix a participant ID inside the Log call.

Please enable tracing on one of the current unit tests, check out the logs, and make sure we can replicate something similar after this change.
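One possible way to satisfy this, sketched with stand-in types: since the instance no longer knows its own ID, the host can wrap the tracer it hands to gpbft and prepend the participant ID to every line, preserving the old "P%d: ..." prefix.

```go
package tracersketch

import "fmt"

// Tracer is a stand-in for the tracer interface gpbft logs through.
type Tracer interface {
	Log(format string, args ...interface{})
}

// prefixedTracer prepends the participant ID to every log line so traces stay
// attributable even though the instance itself no longer carries the ID.
type prefixedTracer struct {
	id    uint64
	inner Tracer
}

func (t prefixedTracer) Log(format string, args ...interface{}) {
	t.inner.Log("P%d: %s", t.id, fmt.Sprintf(format, args...))
}
```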

@Kubuxu Kubuxu (Collaborator, Author), May 10, 2024

Signed-off-by: Jakub Sztandera <oss@kubuxu.com>
Development

Successfully merging this pull request may close these issues.

Location of participants in Lotus stack, multiple identities in one instance
3 participants