Transaction priority fee #541

Draft · wants to merge 4 commits into master
Conversation

@bowenwang1996 (Collaborator):

Proposal to add transaction priority fee to the protocol

@jakmeier (Contributor) left a comment:


Looks pretty solid already. I left some thoughts in the comments as I was reading through.

Comment on lines +88 to +92
while gas_used < gas_limit:
    delayed_receipt_head = if delayed_receipts.empty() { -Inf } else { delayed_receipts.top() }
    incoming_receipt_head = if incoming_receipts.empty() { -Inf } else { incoming_receipts.top() }
    receipt = None
    if delayed_receipt_head.priority > incoming_receipt_head.priority:
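For illustration, the quoted loop's head comparison can be made runnable in Python. This is a sketch under assumed names (e.g. `pop_highest_priority` is my own), modeling each queue as a max-heap stored as a min-heap of `(-priority, receipt)` pairs:

```python
import heapq

NEG_INF = float("-inf")

def pop_highest_priority(delayed_receipts, incoming_receipts):
    """Pop the receipt with the highest priority across both queues.

    An empty queue contributes -Inf, mirroring the pseudocode above.
    Returns None when both queues are empty.
    """
    delayed_head = -delayed_receipts[0][0] if delayed_receipts else NEG_INF
    incoming_head = -incoming_receipts[0][0] if incoming_receipts else NEG_INF
    if delayed_head == NEG_INF and incoming_head == NEG_INF:
        return None
    if delayed_head > incoming_head:
        return heapq.heappop(delayed_receipts)[1]
    return heapq.heappop(incoming_receipts)[1]

delayed, incoming = [], []
heapq.heappush(delayed, (-5, "delayed-a"))
heapq.heappush(incoming, (-7, "incoming-b"))
heapq.heappush(incoming, (-1, "incoming-c"))

order = []
while (receipt := pop_highest_priority(delayed, incoming)) is not None:
    order.append(receipt)
# order is ["incoming-b", "delayed-a", "incoming-c"]
```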
@jakmeier (Contributor):

Are prioritized delayed receipts executed before local receipts? What about prioritized local receipts?
Today's order among receipts is

  • local receipts
  • delayed receipts
  • new incoming receipts

Note that if we don't execute local receipts in the first chunk, they end up in the delayed receipts queue one chunk later.

@bowenwang1996 (Collaborator, Author):

Yes, they are executed before local receipts. The way to think of it is as follows: priority execution always happens first, and regular execution happens afterwards. During regular execution, we preserve the order of execution we have today (local receipts, delayed receipts, then incoming receipts). We could also change the order within regular execution, but that is orthogonal to this proposal.
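A toy sketch of that two-phase ordering (hypothetical data shapes, not nearcore's actual scheduler): the priority phase runs first, sorted by priority, then the regular phase preserves today's local → delayed → incoming order.

```python
def execution_order(prioritized, local, delayed, incoming):
    """Order receipts for a chunk: priority execution first (highest
    priority wins), then regular execution in today's order."""
    priority_phase = sorted(prioritized, key=lambda r: r["priority"], reverse=True)
    regular_phase = local + delayed + incoming
    return priority_phase + regular_phase

order = execution_order(
    prioritized=[{"id": "p1", "priority": 2}, {"id": "p2", "priority": 9}],
    local=[{"id": "l1", "priority": 0}],
    delayed=[{"id": "d1", "priority": 0}],
    incoming=[{"id": "i1", "priority": 0}],
)
# [r["id"] for r in order] == ["p2", "p1", "l1", "d1", "i1"]
```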

neps/nep-0541.md:

## Future possibilities

This NEP should be combined with [NEP-539](https://github.com/near/NEPs/pull/539). Together they redefine how congestion is handled and how users can still send transactions during congestion. It is possible to explore more complex mechanisms on priority fees when there is congestion. For example, the protocol could require that transactions to a congested shard must attach a priority fee, and even place a minimum on the priority fee based on the previous chunk's priority fees.
@jakmeier (Contributor):

Oh and how are we dealing with priority in the outgoing buffers that NEP-539 currently proposes?

I would probably suggest that, at least initially, draining the outgoing buffers should follow a strict FIFO order. The priority fee already helped the transaction get in faster, so perhaps the "cross-shard delay" can stay fair.

Alternatively, we could add more priority queues (one extra per receiving shard, per shard) and give truly fast cross-contract calls even during congestion. That would certainly be a better user experience: otherwise one can still wait a long time for a cross-contract call during congestion, no matter how much one pays.
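The strict-FIFO draining suggested above could look roughly like this (an illustrative sketch; the field names and the gas-budget cutoff are assumptions, not part of NEP-539):

```python
from collections import deque

def drain_outgoing_buffer(buffer, gas_limit):
    """Forward receipts from an outgoing buffer in strict FIFO order,
    ignoring priority: under this suggestion, priority only affects how
    fast a receipt got *into* the system, not cross-shard forwarding.
    Stops once forwarding the next receipt would exceed the gas budget."""
    forwarded, gas_used = [], 0
    while buffer and gas_used + buffer[0]["gas"] <= gas_limit:
        receipt = buffer.popleft()
        gas_used += receipt["gas"]
        forwarded.append(receipt["id"])
    return forwarded

buffer = deque([
    {"id": "r1", "gas": 40, "priority": 1},
    {"id": "r2", "gas": 40, "priority": 9},  # higher priority, still waits its turn
    {"id": "r3", "gas": 40, "priority": 5},
])
forwarded = drain_outgoing_buffer(buffer, gas_limit=100)
# forwarded == ["r1", "r2"]; r3 stays buffered for the next chunk
```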

I just feel perhaps an iterative approach would be better than changing so much all at once, without the experience of how congestion control works in practice.

@bowenwang1996 (Collaborator, Author):

Yeah, I think that makes sense.

@walnut-the-cat (Contributor):

how do we determine 'priority fee'? Is the amount preset by protocol? Does a user have to guess and attach arbitrary amount, hoping it would be high enough?

@bowenwang1996 (Collaborator, Author):

> how do we determine 'priority fee'? Is the amount preset by protocol? Does a user have to guess and attach arbitrary amount, hoping it would be high enough?

It is an arbitrary amount decided by a user

@walnut-the-cat (Contributor):

> > how do we determine 'priority fee'? Is the amount preset by protocol? Does a user have to guess and attach arbitrary amount, hoping it would be high enough?
>
> It is an arbitrary amount decided by a user

It's a good starting point, but it may not be intuitive enough. How can a user ensure that their txn will be 'prioritized' with the minimum amount of premium? Or is that out of our scope?

@birchmd (Contributor) commented Apr 5, 2024:

The protocol working group met today and had a lively discussion about this NEP along with #539

The primary concern raised with this proposal is that if there are no protocol limits on priority fees, then validators can collude to create an effective minimum fee (by censoring transactions that are below their desired fee). But we think this can be addressed by combining the idea of a priority fee with the congestion metrics present in #539 .

It would work something like this. Below a certain threshold, the system is considered "not congested" and receipt priority is ignored. Beyond this threshold, the priority is used as the back-pressure mechanism: the minimum required priority rises as the queue fills up. If a receipt does not have a high enough priority to be added to the target shard's incoming queue, it remains in the outgoing queue of its source shard. If a new transaction's priority fee is not high enough for its initial receipt to enter the shard's queue, the transaction is rejected. To prevent receipts from being stuck forever, the priority of receipts in outgoing queues can increase over time, so eventually either the congestion alleviates or the priority becomes high enough for the target shard to accept the receipt.
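One way the threshold-based back-pressure could look, as an illustrative sketch (the threshold, base, and growth parameters are placeholders, not values from the NEP):

```python
def min_required_priority(queue_len, queue_capacity, threshold=0.5, base=10, k=4.0):
    """Minimum priority a receipt needs to enter the target shard's queue.

    Below `threshold` utilization the shard counts as "not congested" and
    priority is ignored (minimum 0). Past the threshold, the requirement
    grows with how full the queue is. All parameters are illustrative.
    """
    utilization = queue_len / queue_capacity
    if utilization < threshold:
        return 0
    excess = (utilization - threshold) / (1 - threshold)  # 0..1 past threshold
    return int(base * (1 + k * excess))

# min_required_priority(40, 100) == 0   (uncongested: priority ignored)
# min_required_priority(100, 100) == 50 (full queue: steep requirement)
```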

As part of this proposal, it was suggested that 100% of the priority fee be burned so that there is not an incentive for validators to artificially keep the system in a congested state.

cc @bowenwang1996 @mfornet @mm-near

@jakmeier (Contributor) commented Apr 8, 2024:

@birchmd I am not quite sure I understand the concerns the protocol wg is trying to address.

> validators can collude to create an effective minimum fee

If validators collude, the entire system is compromised, isn't it? Maybe I am missing what attacker model you assumed for your discussion.

> As part of this proposal, it was suggested that 100% of the priority fee be burned so that there is not an incentive for validators to artificially keep the system in a congested state.

But then chunk producers have no incentive to actually include higher-priority-fee transactions over normal-fee transactions they simply "like" more for one reason or another.

Keep in mind, chunk producers ultimately hold the power over which transactions are included on chain at all. If we don't give any of the priority fees to them, they can extract that value in other ways. For example, they could offer subscriptions to prioritize transactions from certain accounts and make extra profit this way. With the right pricing, this would also be cheaper for users, so it's a win-win situation for chunk producers and transaction senders, at a loss for the protocol.

If we give rewards to the chunk producers, they are more incentivised to just follow the rules without off-chain shenanigans.

@birchmd (Contributor) commented Apr 8, 2024:

> If validators collude, the entire system is compromised, isn't it?

The level of cooperation between validators in this case is lower than something extreme like including an incorrect state transition. It's more like price fixing in oligopoly situations. Each validator can notice the minimum priority fee being accepted by other validators and raise the minimum fee they accept to be in line. In this way the validators can come to an unspoken agreement to profit at users' expense.

> chunk producers ultimately hold the power over which transactions are included on chain at all.

Yes, this is exactly the price-fixing scenario we are thinking about. The validators would simply censor incoming transactions lower than the minimum priority fee they have chosen, including in cases where such a fee is not needed to control congestion.

> off-chain shenanigans

Yes, this also came up during our discussion. And I agree we do not want any kind of secondary market on blockspace. But at the same time, I think it is important to recognize that if validators profit off congestion, then they will have an incentive to create congestion (perhaps even artificially, by sending the transactions themselves). Instead of giving no part of the priority fee to validators, another idea we had was to make the proportion of the priority fee the validator receives a function with diminishing returns (e.g. log or sqrt), so that maybe there would be an optimal amount of congestion for validators. More detailed analysis is needed to see if this makes sense.
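An illustrative sketch of such a diminishing-returns split (the `sqrt` shape and the `scale` parameter are placeholders, not a concrete proposal):

```python
import math

def split_priority_fee(priority_fee, scale=1_000):
    """Split a priority fee into (validator_reward, burned).

    The validator's cut grows with diminishing returns (sqrt here; log
    would behave similarly), so manufacturing extra congestion earns less
    and less additional reward. `scale` is an illustrative knob.
    """
    reward = min(priority_fee, scale * math.sqrt(priority_fee / scale))
    burned = priority_fee - reward
    return reward, burned

# split_priority_fee(4000) == (2000.0, 2000.0): half the fee is burned
# split_priority_fee(16000): the validator keeps only a quarter
```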

We also talked about how Ethereum faced this same issue with their gas price auction, and they moved to a base-fee + tip model to make gas price fixing by validators less of a concern. We can't apply Ethereum's unsharded, synchronous execution setting directly to Near, of course, but I think it does provide strong evidence that a pure auction is not the ideal model for users.

@mfornet (Member) commented Apr 11, 2024:

> If validators collude, the entire system is compromised, isn't it?

To add to @birchmd's point, validators don't even need to explicitly collude. Each can raise the minimum fee it accepts, and as long as enough of them have done so, submitters using a low fee will notice lower throughput.

> But then chunk producers have no incentive to actually include higher-priority-fee transactions over normal-fee transactions they simply "like" more for one reason or another.

One of the ideas discussed is that the system will operate at a desired capacity (which could be half of the total capacity). To include a transaction once usage goes beyond the desired capacity, a fee needs to be attached. The required fee will increase exponentially as more transactions go beyond the desired capacity.
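A sketch of what such an exponential fee schedule might look like (all parameters are illustrative, not values from the discussion):

```python
def required_priority_fee(load, desired_capacity, base_fee=1, growth=2.0):
    """Fee required to include a transaction once the system is past its
    desired operating point (e.g. half of total capacity). Below that
    point no extra fee is needed; beyond it, the requirement grows
    exponentially with the excess load. Parameters are illustrative."""
    if load <= desired_capacity:
        return 0
    return int(base_fee * growth ** (load - desired_capacity))

# required_priority_fee(3, desired_capacity=5) == 0  (below desired capacity)
# required_priority_fee(10, desired_capacity=5) == 32 (fee doubles per unit of excess)
```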

github-merge-queue bot pushed a commit to near/nearcore that referenced this pull request May 14, 2024
A first step towards near/NEPs#541, introducing a priority field in both transaction and receipt. This is not entirely trivial due to the need to maintain backward compatibility. This PR accomplishes backward compatibility by leveraging the account id serialization and implementing manual deserialization for the new transaction and receipt structures. While this PR appears to be quite large, most of the changes are trivial. The core of the changes is the serialization/deserialization of transactions and receipts.

While this change introduces the new versions, they are prohibited from
being used in the current protocol until the introduction of the
protocol change that leverages priorities.
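A simplified illustration of the general backward-compatibility technique (dispatching on a leading length field that doubles as a version sentinel). This is not nearcore's actual Borsh layout; every constant, field, and function name here is an assumption for illustration:

```python
import struct

MAX_ACCOUNT_ID_LEN = 64           # NEAR account ids are at most 64 bytes
NEW_VERSION_SENTINEL = 2**32 - 1  # can never be a valid account-id length

def serialize_old(account_id: bytes, body: bytes) -> bytes:
    """Legacy layout: 4-byte LE account-id length, the id, then the body."""
    return struct.pack("<I", len(account_id)) + account_id + body

def serialize_new(account_id: bytes, body: bytes, priority: int) -> bytes:
    """New layout: sentinel, version byte, u64 priority, then the legacy layout."""
    return (struct.pack("<I", NEW_VERSION_SENTINEL)
            + struct.pack("<BQ", 1, priority)
            + serialize_old(account_id, body))

def deserialize(data: bytes) -> dict:
    """Dispatch on the leading field: a valid account-id length means the
    legacy format, the sentinel means a versioned one."""
    (head,) = struct.unpack_from("<I", data, 0)
    if head <= MAX_ACCOUNT_ID_LEN:  # legacy message: head is the id length
        account_id = data[4:4 + head]
        return {"priority": 0, "account_id": account_id, "body": data[4 + head:]}
    _version, priority = struct.unpack_from("<BQ", data, 4)
    decoded = deserialize(data[13:])  # skip 4 sentinel + 1 version + 8 priority bytes
    decoded["priority"] = priority
    return decoded
```

With a scheme like this, messages produced by old nodes still decode (with a default `priority` of 0), while new messages carry an explicit priority that old decoders safely reject.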