
Client Side Flow Control model for the LES protocol


Any node that takes on a server role in the LES protocol needs a way to limit the amount of work it does for each client peer during a given time period. A server can always just serve requests slowly when it is overloaded, but it is beneficial to give some flow control feedback to the clients. This way, clients can (and have an incentive to) behave nicely and avoid sending requests too quickly in the first place, only to time out and resend them while the server is still working on them. They can also distribute requests better between the multiple servers they are connected to. And since clients can do this, servers can expect them to do so and drop them instantly if they break the flow control rules.

The model

Let us assume that serving each request has a cost (depending on its type and parameters) for the server. This cost is determined by the server, but it has an upper limit for any valid request. The server assigns a "buffer" to each client from which the cost of each request is deducted. The buffer has an upper limit and a recharge rate (cost units per second). The server can decide to recharge it more quickly at any time if it has more free resources, but there is a guaranteed minimum recharge rate. If a request is received that would drain the client's buffer below zero, the client has broken the flow control rules and is instantly dropped.
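
As a rough illustration, the server-side buffer accounting could be implemented along the following lines in Go. The type and method names (ClientBuffer, Deduct, recharge) are assumptions made for this sketch, not the actual go-ethereum code:

```go
package flowcontrol

import "time"

// ClientBuffer tracks one client's flow-control budget on the server side.
// Names and units here are illustrative assumptions, not the real LES code.
type ClientBuffer struct {
	Limit        uint64    // BL: maximum buffer value
	MinRecharge  uint64    // MRR: guaranteed recharge, in cost units per second
	Value        uint64    // BV: current buffer value
	lastRecharge time.Time // last time Value was updated
}

// recharge credits the buffer for the time elapsed since the last update,
// never exceeding the buffer limit.
func (b *ClientBuffer) recharge(now time.Time) {
	elapsed := now.Sub(b.lastRecharge)
	b.lastRecharge = now
	credit := uint64(elapsed.Seconds() * float64(b.MinRecharge))
	if b.Value+credit > b.Limit {
		b.Value = b.Limit
	} else {
		b.Value += credit
	}
}

// Deduct charges cost against the buffer. It returns false if the request
// would drain the buffer below zero, i.e. the client broke the rules.
func (b *ClientBuffer) Deduct(cost uint64, now time.Time) bool {
	b.recharge(now)
	if cost > b.Value {
		return false // protocol breach: drop the peer
	}
	b.Value -= cost
	return true
}
```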

The protocol

The server announces three parameters during handshake (RLP data types noted after each; a sketch of one possible in-memory representation follows the list):

  • Buffer Limit (BL): P
  • Maximum Request Cost table (MRC): [[MsgCode: P, BaseCost: P, ReqCost: P], ...]
  • Minimum Rate of Recharge (MRR): P
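
These parameters could be held in structures like the following. The struct and field names are illustrative only; a real implementation would decode them from the handshake messages via the protocol's RLP layer:

```go
package flowcontrol

// MRCEntry gives the maximum request cost formula for one message code:
// MaxCost = BaseCost + ReqCost * N, where N is the number of elements requested.
type MRCEntry struct {
	MsgCode  uint64
	BaseCost uint64
	ReqCost  uint64
}

// ServerParams holds the three flow-control values announced at handshake.
type ServerParams struct {
	BufferLimit uint64     // BL
	CostTable   []MRCEntry // MRC
	MinRecharge uint64     // MRR, in cost units per second
}

// MaxCost returns the announced cost ceiling for a request with the given
// message code and n elements, or false if the code is not in the table.
func (p *ServerParams) MaxCost(msgCode, n uint64) (uint64, bool) {
	for _, e := range p.CostTable {
		if e.MsgCode == msgCode {
			return e.BaseCost + e.ReqCost*n, true
		}
	}
	return 0, false
}
```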

It sets the Buffer Value (BV) of the client to BL. When a request is received from a client, the server calculates its cost according to its own estimates (but never higher than MaxCost, which equals BaseCost + ReqCost * N, where N is the number of individual elements asked for in the request), then deducts it from BV. If BV goes negative, the server drops the peer; otherwise it starts serving the request. The reply message contains a BV value that is the previously calculated BV plus the amount recharged during the time spent serving. Note that since the server can always assign any cost up to MaxCost for a request (and a client should not assume otherwise), it can drop a client without even processing the message if one is received while BV < MaxCost, because that is already a protocol breach.
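
Building on the two sketches above, the server-side handling of a single request might look roughly like this. HandleRequest and the serve callback are hypothetical names, and charging the full MaxCost up front is just one conservative strategy among those the rules permit:

```go
package flowcontrol

import (
	"errors"
	"time"
)

// HandleRequest applies the flow-control rules to one incoming request and
// returns the BV value to attach to the reply. The serve callback stands in
// for the actual request processing; this is a sketch, not the real code path.
func HandleRequest(buf *ClientBuffer, params *ServerParams, msgCode, n uint64,
	serve func() error) (uint64, error) {

	maxCost, ok := params.MaxCost(msgCode, n)
	if !ok {
		return 0, errors.New("unknown message code")
	}
	// A request arriving while BV < MaxCost is already a breach, so the
	// server may refuse it without processing the message at all. Charging
	// the full MaxCost here is a conservative choice for this sketch.
	if !buf.Deduct(maxCost, time.Now()) {
		return 0, errors.New("flow control breached: drop peer")
	}
	if err := serve(); err != nil {
		return 0, err
	}
	// Credit the recharge accumulated while serving, then report the new BV.
	buf.recharge(time.Now())
	return buf.Value, nil
}
```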

The client always maintains a lowest estimate of its current BV, called BLE (a sketch of this client-side bookkeeping follows the list). It:

  • sets BLE to BL at handshake
  • doesn't send any request to the server when BLE < MaxCost
  • deducts MaxCost when sending a request
  • recharges BLE at the rate of MRR whenever BLE is less than BL
  • when a reply message with a new BV value is received, it sets BLE to BV-Sum(MaxCost), summing the MaxCost values of requests sent after the one belonging to this reply.
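
A client could keep track of BLE along these lines. The sketch assumes replies arrive in the order the requests were sent; ClientEstimate, CanSend and GotReply are illustrative names, not the real client code:

```go
package flowcontrol

import "time"

// ClientEstimate tracks the client-side lowest estimate (BLE) of its buffer
// value at one server.
type ClientEstimate struct {
	params     ServerParams
	ble        uint64
	lastUpdate time.Time
	inFlight   []uint64 // MaxCost of outstanding requests, oldest first
}

// NewClientEstimate sets BLE to BL at handshake time.
func NewClientEstimate(p ServerParams) *ClientEstimate {
	return &ClientEstimate{params: p, ble: p.BufferLimit, lastUpdate: time.Now()}
}

// recharge credits BLE at the guaranteed minimum rate, capped at BL.
func (c *ClientEstimate) recharge(now time.Time) {
	credit := uint64(now.Sub(c.lastUpdate).Seconds() * float64(c.params.MinRecharge))
	c.lastUpdate = now
	if c.ble+credit > c.params.BufferLimit {
		c.ble = c.params.BufferLimit
	} else {
		c.ble += credit
	}
}

// CanSend reports whether a request with the given MaxCost may be sent now;
// if so, it deducts MaxCost from BLE and records the request as in flight.
func (c *ClientEstimate) CanSend(maxCost uint64) bool {
	c.recharge(time.Now())
	if c.ble < maxCost {
		return false // wait for more recharge or for an outstanding reply
	}
	c.ble -= maxCost
	c.inFlight = append(c.inFlight, maxCost)
	return true
}

// GotReply processes the BV value from a reply: BLE becomes BV minus the sum
// of MaxCost values of the requests sent after the one being answered. The
// answered request is assumed to be the oldest in-flight one.
func (c *ClientEstimate) GotReply(bv uint64) {
	if len(c.inFlight) > 0 {
		c.inFlight = c.inFlight[1:]
	}
	var pending uint64
	for _, mc := range c.inFlight {
		pending += mc
	}
	if bv > pending {
		c.ble = bv - pending
	} else {
		c.ble = 0
	}
	c.lastUpdate = time.Now()
}
```

If CanSend returns false, the client simply waits, either for BLE to recharge at the MRR rate or for an outstanding reply to raise it, before sending the request.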